
Data Working Group: items tagged "sharing"


Lisa Johnston

Scientific Data Sharing Project - 0 views

  •  The Data Sharing Project proposes to further this goal, initially in the field of medicine, by working to create a raw data sharing program that will serve as a model for other disciplines attempting to make their own way in this arena.
Amy West

The Enduring Value of Social Science Research: The Use and Reuse of Primary Research Da... - 2 views

  •  A paper on data sharing from a social-science perspective, with some analysis of sharing practices to date.
Lisa Johnston

Digital Curation Centre: DCC SCARP Project - 0 views

  •  18 January 2010 | Key Perspectives | Type: report. The Digital Curation Centre is pleased to announce the report "Data Dimensions: Disciplinary Differences in Research Data Sharing, Reuse and Long-term Viability" by Key Perspectives, one of the final outputs of the DCC SCARP project. The project investigated attitudes and approaches to data deposit, sharing and reuse, curation, and preservation across a range of research fields in differing disciplines. The synthesis report (which drew on the SCARP case studies plus a number of others, identified in the Appendix) identifies factors that help explain how curation practices in research groups differ in disciplinary terms, providing a backdrop to different digital curation approaches.
Lisa Johnston

Nature 461 (2009): Data sharing: Empty archives - 0 views

  •  A news feature on open access to data and the difficulties of sharing.
Lisa Johnston

The Current Status of Scientific Data Sharing and Spatial Data Services in China | Moun... - 2 views

  •  Many scientific data repositories in China have been growing since the 1990s; an interesting sea change in ethics and sharing practices.
Amy West

PLoS Computational Biology: Defrosting the Digital Library: Bibliographic Tools for the... - 0 views

  • Presently, the number of abstracts considerably exceeds the number of full-text papers.
  • Full papers that are available electronically are likely to be much more widely read and cited.
  • Since all of these libraries are available on the Web, increasing numbers of tools for managing digital libraries are also Web-based. They rely on Uniform Resource Identifiers (URIs [25] or “links”) to identify, name, and locate resources such as publications and their authors.
  • We often take URIs for granted, but these humble strings are fundamental to the way the Web works [58] and how libraries can exploit it, so they are a crucial part of the cyberinfrastructure [59] required for e-science on the Web.
  • Link to data (the full text of a given article).
  • To begin with, a user selects a paper, which will have come proximately from one of four sources: 1) searching some digital library, “SEARCH” in Figure 4; 2) browsing some digital library (“BROWSE”); 3) a personal recommendation, word-of-mouth from a colleague, etc. (“RECOMMEND”); 4) referred to by reading another paper, and thus cited in its reference list (“READ”).
  • There is no universal method to retrieve a given paper, because there is no single way of identifying publications across all digital libraries on the Web.
  • Publication metadata often gets “divorced” from the data it is about, and this forces users to manage each independently, a cumbersome and error-prone process.
  • There is no single way of representing metadata, and without adherence to common standards (which largely already exist, but in a plurality) there never will be.
  • Where DOIs exist, they are supposed to be the definitive URI. This kind of automated disambiguation, of publications and authors, is a common requirement for building better digital libraries.
  • Publication metadata are essential for machines and humans in many tasks, not just the disambiguation described above. Despite their importance, metadata can be frustratingly difficult to obtain.
  • So, given an arbitrary URI, there are only two guaranteed options for getting any metadata associated with it. Using http [135], it is possible for a human (or machine) to do the following (a minimal sketch of both options appears after this list).
  • This technique works, but is not particularly robust or scalable, because every time the style of a particular Web site changes, the screen-scraper will probably break as well.
  • This returns metadata only, not the whole resource. These metadata will not include the author, journal, title, date, etc., of the publication itself.
  • As it stands, it is not possible to perform mundane and seemingly simple tasks such as “get me all publications that fulfill some criteria and for which I have licensed access as PDF” to save locally, or “get me a specific publication and all those it immediately references”.
  • Having all these different metadata standards would not be a problem if they could easily be converted to and from each other, a process known as “round-tripping” (see the round-trip sketch after this list).
  • Many of these mappings are non-trivial, e.g., XML to RDF and back again.
  • More complex metadata include the inbound and outbound citations, related articles, and “supplementary” information.
  • Personalization allows users to say this is my library, the sources I am interested in, my collection of references, as well as literature I have authored or co-authored. Socialization allows users to share their personal collections and see who else is reading the same publications, including added information such as related papers with the same keyword (or “tag”) and what notes other people have written about a given publication.
  • CiteULike normalizes bookmarks before adding them to its database, which means it calculates whether each URI bookmarked identifies an identical publication added by another user, with an equivalent URI. This is important for social tagging applications, because part of their value is the ability to see how many people (and who) have bookmarked a given publication. CiteULike also captures another important bibliometric, viz. how many users have potentially read a publication, not just cited it.
  • Connotea uses MD5 hashes [157] to store URIs that users bookmark, and normalizes them after adding them to its database, rather than before (see the hashing sketch after this list).
  • The source code for Connotea [159] is available, and there is an API that allows software engineers to build extra functionality around Connotea, for example the Entity Describer [160].
  • Personalization and socialization of information will increasingly blur the distinction between databases and journals [175], and this is especially true in computational biology where contributions are particularly of a digital nature.
  • This is usually because they are either too “small” or too “big” to fit into journals.
  • As we move in biology from a focus on hypothesis-driven to data-driven science [1],[181],[182], it is increasingly recognized that databases, software models, and instrumentation are the scientific output, rather than the conventional and more discursive descriptions of experiments and their results.
  • In the digital library, these size differences are becoming increasingly meaningless as data, information, and knowledge become more integrated, socialized, personalized, and accessible. Take Postgenomic [183], for example, which aggregates scientific blog posts from a wide variety of sources. These posts can contain commentary on peer-reviewed literature and links into primary database sources. Ultimately, this means that the boundaries between the different types of information and knowledge are continually blurring, and future tools seem likely to continue this trend.
  • The identity of people is a twofold problem because applications need to identify people as users in a system and as authors of publications.
  • Passing valuable data and metadata onto a third party requires that users trust the organization providing the service. For large publishers such as Nature Publishing Group, responsible for Connotea, this is not necessarily a problem.
  • Providers may unilaterally change their data model, making the tools for accessing their data backwards incompatible, a common occurrence in bioinformatics.
  • Although the practice of sharing raw data immediately, as with Open Notebook Science [190], is gaining ground, many users are understandably cautious about sharing information online before peer-reviewed publication.
  •  Yes, but Alexandria was also a lot smaller; not totally persuaded by analogy here...
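
A minimal sketch of the two guaranteed options mentioned above, assuming they are an HTTP GET (fetch the whole resource) and an HTTP HEAD (fetch the response headers alone); the DOI below appears to be the article's own and is used purely for illustration:

    # Two guaranteed ways to get *something* from an arbitrary URI over HTTP.
    from urllib.request import Request, urlopen

    uri = "https://doi.org/10.1371/journal.pcbi.1000204"  # illustrative DOI

    # Option 1: GET retrieves the whole resource (here, a landing page).
    with urlopen(Request(uri, method="GET")) as response:
        print(len(response.read()), "bytes of HTML")

    # Option 2: HEAD retrieves headers only. Note that none of these name
    # the author, journal, title, or date of the paper itself, which is
    # exactly the gap described in the annotations above.
    with urlopen(Request(uri, method="HEAD")) as response:
        for name in ("Content-Type", "Content-Length", "Last-Modified"):
            print(name, "=", response.headers.get(name))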
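On round-tripping, a toy illustration of why mappings such as XML to RDF and back are non-trivial; the record and element names are invented for the example. Flattening XML into RDF-style triples discards ordering and nesting, so the rebuilt document is faithful only when the mapping records that structure explicitly:

    import xml.etree.ElementTree as ET

    record = """<article id="pcbi.1000204">
      <title>Defrosting the Digital Library</title>
      <author>Hull</author>
      <author>Pettifer</author>
    </article>"""

    # XML -> triples: the shape an RDF store expects.
    root = ET.fromstring(record)
    subject = root.attrib["id"]
    triples = [(subject, child.tag, child.text) for child in root]

    # Triples -> XML: element order, nesting, and the attribute-versus-
    # element distinction are no longer in the data, so this round-trip
    # only succeeds because the record was trivially flat to begin with.
    rebuilt = ET.Element("article", id=subject)
    for _, predicate, obj in triples:
        ET.SubElement(rebuilt, predicate).text = obj
    print(ET.tostring(rebuilt, encoding="unicode"))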
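And on bookmark normalization: a sketch of the idea of treating equivalent URIs as one publication, with the Connotea-style twist of keying storage on an MD5 digest of the URI. The normalization rules below are invented for illustration and are not the services' actual ones:

    import hashlib
    from urllib.parse import urlparse, urlunparse

    def normalize(uri: str) -> str:
        """Reduce trivially different URIs to one canonical form."""
        parts = urlparse(uri.strip())
        # Lowercase the scheme and host; drop the query string, the
        # fragment, and any trailing slash.
        return urlunparse((parts.scheme.lower(), parts.netloc.lower(),
                           parts.path.rstrip("/"), "", "", ""))

    def bookmark_key(uri: str) -> str:
        """Fixed-width database key for a bookmarked URI (MD5 digest)."""
        return hashlib.md5(normalize(uri).encode("utf-8")).hexdigest()

    a = "HTTP://DX.DOI.ORG/10.1371/journal.pcbi.1000204/"
    b = "http://dx.doi.org/10.1371/journal.pcbi.1000204#abstract"
    assert bookmark_key(a) == bookmark_key(b)  # same publication, same key
    print(bookmark_key(a))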
Lisa Johnston

Sharing Data for Disease Research - WSJ.com - 0 views

  •  A bold project hopes that getting scientists to share information can deepen their understanding of diseases.
David Govoni

The Globus Alliance - 0 views

  •  "The Globus Alliance is a community of organizations and individuals developing fundamental technologies behind the 'Grid,' which lets people share computing power, databases, instruments, and other on-line tools ..."
David Govoni

Scratchpads | Biodiversity Online - 0 views

  •  "Scratchpads are an easy to use, social networking application that enable communities of researchers to manage, share and publish taxonomic data online. Sites are hosted at the Natural History Museum London, and offered free to any scientist that completes..."
David Govoni

iSGTW - International Science Grid This Week - 0 views

  •  "A weekly newsletter promoting grid computing, iSGTW shares stories of grid-empowered research, scientific discoveries, and grid technology from around the world. The iSGTW weekly e-newsletter is emailed free to subscribers and is also available via RSS."
David Govoni

Biodiversity Information Standards | TDWG - 0 views

  •  "Biodiversity Information Standards (TDWG) is an international not-for-profit group that develops standards and protocols for sharing biodiversity data."
Lisa Johnston

Chronopolis -- Digital Preservation Program -- Long-Term Mass-Scale Federated Digital P... - 0 views

  •  The Chronopolis Digital Preservation Demonstration Project, one of the Library of Congress' latest efforts to collect and preserve at-risk digital information, has been officially launched as a multi-member partnership to meet the archival needs of a wide range of cultural and social domains. Chronopolis is a digital preservation data grid framework being developed by the San Diego Supercomputer Center (SDSC) at UC San Diego, the UC San Diego Libraries (UCSDL), and their partners at the National Center for Atmospheric Research (NCAR) in Colorado and the University of Maryland's Institute for Advanced Computer Studies (UMIACS). A key goal of the Chronopolis project is to provide cross-domain collection sharing for long-term preservation. Using existing high-speed educational and research networks and mass-scale storage infrastructure investments, the partnership is designed to leverage the data storage capabilities at SDSC, NCAR, and UMIACS to provide a preservation data grid that emphasizes heterogeneous and highly redundant data storage systems.
Lisa Johnston

http://www.si.edu/opanda/docs/Rpts2011/DataSharingFinal110328.pdf - 1 views

  •  The Smithsonian will develop a policy for biology data sharing, along with more standardization for small-science research groups.
Amy West

Data Sharing for Demographic Research - 3 views

  •  Note the extensive use of citation standards, plus download statistics and citation statistics when they can get them. Also co-sponsored by the Minnesota Population Center (MPC), and some of the data are totally free to all users.