
Science 2.0 group: items tagged "article"

david osimo

Article of the Future - 0 views

  •  
    Resulting from the Article of the Future project's innovations, we can now announce the redesigned SciVerse ScienceDirect article page, with a new layout that includes a navigational pane and an optimized reading middle pane. The Article of the Future project is an ongoing initiative aiming to revolutionize the traditional format of the academic paper with regard to three key elements: presentation, content and context.
katarzyna szkuta

PLOS article level metrics - 0 views

  •  
    Research in context The Public Library of Science (PLoS) is the first publisher to place transparent and comprehensive information about the usage and reach of published articles onto the articles themselves, so that the entire academic community can assess their value.
david osimo

Filter-then-publish vs. publish-then-filter | Sauropod Vertebra Picture of the Week - 2 views

  •  
    "Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of 'importance' required by a given journal, PLoS ONE uses peer review to determine whether a paper is technically sound and worthy of inclusion in the published scientific record. Once the work is published in PLoS ONE, the broader community is then able to discuss and evaluate the significance of the article (through the number of citations it attracts; the downloads it achieves; the media and blog coverage it receives; and the post-publication Notes, Comments and Ratings that it receives on PLoS ONE etc)."
iaravps

Research 2.0.3: The future of research communication : Soapbox Science - 0 views

  • Open Access has led directly to an increase in usage of platforms that make it easy for researchers to comply with this mandate by depositing open access versions of their papers. Examples of companies in this space are Academia.edu, ResearchGate.net and Mendeley. Open Access also means that anyone can contribute to the post-publication evaluation of research articles.
  • There are a number of initiatives focused on improving the process of peer review. Post-publication peer review, in which journals publish papers after minimal vetting and then encourage commentary from the scientific community, has been explored by several publishers, but has run into difficulties incentivizing sufficient numbers of experts to participate.  Initiatives like Faculty of 1000 have tried to overcome this by corralling experts as part of post-publication review boards.  And sometimes, as in the case of arsenic-based life, the blogosphere has taken peer review into its own hands.
  • Traditionally, the number of first- and senior-author publications, and the journal(s) in which those publications appear, have been the key criteria for assessing the quality of a researcher's work. Funding agencies use these criteria to determine whether to award research grants for future work, and academic research institutions use them to inform hiring and career-progression decisions. However, this is actually a very poor measure of a researcher's true impact, since a) it only captures a fraction of a researcher's contribution, and b) given that more than 70% of published research cannot be reproduced, the publication-based system rewards researchers for the wrong thing (the publication of novel research, rather than the production of robust research).
  • ...2 more annotations...
  • The h-index was one of the first alternatives proposed as a measure of scientific research impact. It and its variants rely on citation statistics, which is a good start, but which include a delay that can be quite long, depending on how rapidly papers are published in a particular field. A number of startups are attempting to improve the way a researcher's reputation is measured. One is ImpactStory, which aggregates metrics from a researcher's articles, datasets, blog posts, and more. Another is ResearchGate.net, which has developed its own RG Score.
  • Which set of reputational signifiers rise to the top will shape the future of science itself.
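The h-index mentioned in the annotation above has a simple definition: a researcher has index h when h of their papers each have at least h citations. A minimal Python sketch, using hypothetical citation counts:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    # Sort citation counts in descending order, then advance while each
    # count still meets or exceeds its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([25, 8, 5, 3, 3, 0]))  # → 3 (three papers with ≥ 3 citations)
```

The citation-delay problem noted above is visible here: a new paper enters the list with a count of 0 and cannot move the index until citations accumulate.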
david osimo

PLoS ONE: A Collaboratively-Derived Science-Policy Research Agenda - 0 views

  •  
    Interesting article with many authors.
katarzyna szkuta

F1000 - Post-publication peer review of the biomedical literature - 0 views

  •  
    The core service of Faculty of 1000 (F1000) identifies and evaluates the most important articles in biology and medical research publications. The selection process comprises a peer-nominated global 'Faculty' of the world's leading scientists and clinicians who rate the best of the articles they read and explain their importance.
david osimo

Replication backlash « Statistical Modeling, Causal Inference, and Social Science... - 0 views

  •  
    "if your finding is... fragile... researchers should know [that] right away from reading the article." http://t.co/kvohH8iJON @StatModeling
iaravps

Rise of 'Altmetrics' Revives Questions About How to Measure Impact of Research - Techno... - 0 views

  • "Campuswide there's a little sensitivity toward measuring faculty output," she says. Altmetrics can reveal that nobody's talking about a piece of work, at least in ways that are trackable—and a lack of interest is hardly something researchers want to advertise in their tenure-and-promotion dossiers. "What are the political implications of having a bunch of stuff online that nobody has tweeted about or Facebooked or put on Mendeley?"
    • iaravps
       
      What about uncited papers?
  • "The folks I've talked to are like, 'Yes, it does have some value, but in terms of the reality of my tenure-and-promotion process, I have to focus on other things,'" she says.
  • As that phrasing indicates, altmetrics data can't reveal everything. Mr. Roberts points out that if someone tweets about a paper, "they could be making fun of it." If a researcher takes the time to download a paper into an online reference manager like Mendeley or Zotero, however, he considers that a more reliable sign that the work has found some kind of audience. "My interpretation is that because they downloaded it, they found it useful," he says.
  • ...1 more annotation...
  • It's an interesting story in itself how the desire of librarians 50 years ago to know what journals to buy now propels the entire scientific enterprise across the globe.
Francesco Mureddu

CiteULike: Peekaboom: a game for locating objects in images - 0 views

  •  
    We introduce Peekaboom, an entertaining web-based game that can help computers locate objects in images. People play the game because of its entertainment value, and as a side effect of them playing, we collect valuable image metadata, such as which pixels belong to which object in the image. The collected data could be applied towards constructing more accurate computer vision algorithms, which require massive amounts of training and testing data not currently available. Peekaboom has been played by thousands of people, some of whom have spent over 12 hours a day playing, and thus far has generated millions of data points. In addition to its purely utilitarian aspect, Peekaboom is an example of a new, emerging class of games, which not only bring people together for leisure purposes, but also exist to improve artificial intelligence. Such games appeal to a general audience, while providing answers to problems that computers cannot yet solve.
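The annotation above does not describe Peekaboom's actual aggregation algorithm, but the core idea of turning many players' answers into reliable pixel-level metadata can be sketched as a per-pixel majority vote. A toy illustration (the coordinates and vote threshold are invented for the example):

```python
from collections import Counter

def aggregate_clicks(player_selections, min_votes):
    """player_selections: list of sets of (x, y) pixels each player marked
    as belonging to the object. Returns the pixels marked by at least
    min_votes players."""
    votes = Counter(p for selection in player_selections for p in selection)
    return {pixel for pixel, n in votes.items() if n >= min_votes}

# Three hypothetical players outlining the same object
players = [
    {(1, 1), (1, 2), (2, 1)},
    {(1, 1), (2, 1), (2, 2)},
    {(1, 1), (1, 2), (2, 1)},
]
print(sorted(aggregate_clicks(players, min_votes=2)))  # → [(1, 1), (1, 2), (2, 1)]
```

Requiring agreement between independent players is what lets a game like this produce training data clean enough for computer-vision use, despite individual players being noisy or adversarial.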
katarzyna szkuta

Rosetta@home - 0 views

  •  
    distributed-computing projects in which volunteers download a small piece of software and let their home computers do some extracurricular work when the machines would otherwise be idle (after Nature article of Eric Hand)
katarzyna szkuta

DeepDyve - The simplest way to get the articles you need - 0 views

  •  
    DeepDyve is the largest online rental service for scientific, technical and medical research.
Francesco Mureddu

Identifying population differences in whole-brain... [Neuroimage. 2010] - PubMed - NCBI - 0 views

  •  
    Models of whole-brain connectivity are valuable for understanding neurological function, development and disease. This paper presents a machine learning based approach to classify subjects according to their approximated structural connectivity patterns and to identify features which represent the key differences between groups. Brain networks are extracted from diffusion magnetic resonance images obtained by a clinically viable acquisition protocol. Connections are tracked between 83 regions of interest automatically extracted by label propagation from multiple brain atlases followed by classifier fusion. Tracts between these regions are propagated by probabilistic tracking, and mean anisotropy measurements along these connections provide the feature vectors for combined principal component analysis and maximum uncertainty linear discriminant analysis. The approach is tested on two populations with different age distributions: 20-30 and 60-90 years. We show that subjects can be classified successfully (with 87.46% accuracy) and that the features extracted from the discriminant analysis agree with current consensus on the neurological impact of ageing.
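The classification step in the abstract above (dimensionality reduction followed by linear discriminant analysis on connection-wise feature vectors) can be sketched with NumPy. This is a toy reconstruction on synthetic data: it uses plain PCA and standard two-class Fisher LDA rather than the paper's maximum uncertainty variant, and it reports in-sample accuracy only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_features = 20, 50  # e.g. one feature per tracked connection

# Synthetic stand-in for mean-anisotropy feature vectors: two groups
# whose feature means differ
young = rng.normal(0.0, 1.0, (n_per_group, n_features))
old = rng.normal(0.8, 1.0, (n_per_group, n_features))
X = np.vstack([young, old])
y = np.array([0] * n_per_group + [1] * n_per_group)

# PCA via SVD on centered data, keeping 10 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T

# Two-class Fisher LDA in PCA space: w = Sw^{-1} (m1 - m0)
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# Classify by projecting onto w and thresholding at the midpoint
threshold = w @ (m0 + m1) / 2
pred = (Z @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"in-sample accuracy: {accuracy:.2f}")
```

PCA first makes the within-class scatter matrix well conditioned (here 10 components from 40 samples), which is why pipelines like the one in the paper combine the two steps rather than running LDA directly on high-dimensional connectivity features.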
david osimo

Research impact: Altmetrics make their mark : Naturejobs - 0 views

  •  
    "Research Excellence Framework (REF), an evaluation of UK academia that influences funding"
david osimo

Research 2.0.2: How research is conducted : Soapbox Science - 0 views

  • Traditionally, research was conducted by a single scientist or a small team of scientists within a single laboratory. The scientist(s) would conduct the majority of required experiments themselves, even if they did not initially have the necessary expertise or equipment. If they could not conduct an experiment themselves, they would attempt to find a collaborator in another lab to help them by using a barter system. This barter system essentially involves one scientist asking for a favor from another scientist, with the potential upside being co-authorship on any publications that are produced by the work. This type of collaborative arrangement depends heavily on personal networks developed by scientists.
  • The amount of collaboration required in research will continue to increase, driven by many factors, including: the need for ever more complex and large-scale instrumentation to delve deeper into biological and physical processes; the maturation of scientific disciplines, requiring more and more knowledge to make significant advances, a demand that can often only be met by pooling knowledge with others; and an increasing desire to obtain cross-fertilization across disciplines.
  • So with large teams of scientists, often based at remote institutions, increasingly needing to work together to solve complex problems, there will be a demand for new tools to help facilitate collaboration. Specifically, there will be an increasing need for tools that allow researchers to easily find and access other scientists with the expertise required to advance their research projects. In my view, to operate most efficiently these tools also need new methods to reward researchers for participating in these collaborations.
  • ...1 more annotation...
  • One result of the rise in research requiring the combination of multiple specialized areas of expertise on ever-shortening time-scales is, unfortunately, a concomitant decrease in the reproducibility of published results (New York Times, Wall Street Journal and Nature). It is now apparent that independent validation of key experimental findings is an essential step that will need to be built into the research process.