
Group items matching "Reproducibility" in title, tags, annotations or URL

iaravps

Research 2.0.3: The future of research communication : Soapbox Science

  • Open Access has led directly to an increase in usage of platforms that make it easy for researchers to comply with open access mandates by depositing open access versions of their papers. Examples of companies in this space are Academia.edu, ResearchGate.net and Mendeley. Open Access also means that anyone can contribute to the post-publication evaluation of research articles.
  • There are a number of initiatives focused on improving the process of peer review. Post-publication peer review, in which journals publish papers after minimal vetting and then encourage commentary from the scientific community, has been explored by several publishers, but has run into difficulties incentivizing sufficient numbers of experts to participate.  Initiatives like Faculty of 1000 have tried to overcome this by corralling experts as part of post-publication review boards.  And sometimes, as in the case of arsenic-based life, the blogosphere has taken peer review into its own hands.
  • Traditionally, the number of first- and senior-author publications, and the journal(s) in which those publications appear, have been the key criteria for assessing the quality of a researcher’s work. Funding agencies use these criteria to decide whether to award research grants, and academic research institutions use them to inform hiring and career progression decisions. However, this is actually a very poor measure of a researcher’s true impact, since a) it captures only a fraction of a researcher’s contribution and b) more than 70% of published research cannot be reproduced, so the publication-based system rewards researchers for the wrong thing (the publication of novel research, rather than the production of robust research).
  • The h-index was one of the first alternatives proposed as a measure of scientific research impact (a sketch of the computation appears after this list). It and its variants rely on citation statistics, which is a good start, but they include a delay which can be quite long, depending on the rapidity with which papers are published in a particular field. A number of startups are attempting to improve the way a researcher’s reputation is measured. One is ImpactStory, which is attempting to aggregate metrics from a researcher’s articles, datasets, blog posts, and more. Another is ResearchGate.net, which has developed its own RG Score.
  • Which set of reputational signifiers rise to the top will shape the future of science itself.
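
The h-index mentioned above is simple to state: a researcher has index h if h of their papers have each been cited at least h times. As a minimal sketch (my illustration, not part of the original post), the computation in Python:

    # Minimal sketch: compute an h-index from per-paper citation counts.
    # The h-index is the largest h such that h papers have >= h citations.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)  # highest-cited first
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank  # this paper still has at least 'rank' citations
            else:
                break
        return h

    # Example: three papers with at least 3 citations each -> h-index of 3.
    print(h_index([10, 8, 3, 2, 1]))  # prints 3
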
katarzyna szkuta

PsychFileDrawer.org - An Archive of Brief Reports of Replication Attempts in Experimental Psychology - Now Open for Beta Testing

  • The website is designed to make it quick and convenient to upload reports, but also to require enough detail to make the report credible and responsible. The site also provides a discussion forum for each posting, allowing users to discuss the report (potentially allowing collective brainstorming about possible moderator variables, defects in the original study or in the non-replication attempt, etc.).
david osimo

Research 2.0.2: How research is conducted : Soapbox Science

  • Traditionally, research was conducted by a single scientist or a small team of scientists within a single laboratory. The scientist(s) would conduct the majority of required experiments themselves, even if they did not initially have the necessary expertise or equipment. If they could not conduct an experiment themselves, they would attempt to find a collaborator in another lab to help them by using a barter system. This barter system essentially involves one scientist asking for a favor from another scientist, with the potential upside being co-authorship on any publications that are produced by the work. This type of collaborative arrangement depends heavily on personal networks developed by scientists.
  • The amount of collaboration required in research will continue to increase, driven by many factors, including:
    - The need for ever more complex and large-scale instrumentation to delve deeper into biological and physical processes
    - The maturation of scientific disciplines, requiring more and more knowledge in order to make significant advances, a demand which can often only be met by pooling knowledge with others
    - An increasing desire to obtain cross-fertilization across disciplines
  • So with large teams of scientists, often based at remote institutions, increasingly needing to work together to solve complex problems, there will be a demand for new tools to help facilitate collaboration. Specifically, there will be an increasing need for tools that allow researchers to easily find and access other scientists with the expertise required to advance their research projects. In my view, to operate most efficiently these tools also need new methods to reward researchers for participating in these collaborations.
  • One result of the rise in research requiring the combination of multiple specialized areas of expertise on ever-shortening time-scales is, unfortunately, a concomitant decrease in the reproducibility of published results (New York Times, Wall Street Journal and Nature). It is now apparent that independent validation of key experimental findings is an essential step that will need to be built into the research process.