
Data Working Group
Stephen Hearn

Developing Best Practices for Supplemental Materials

  • An interesting presentation (19 min.) by Linda Beebe, Senior Director of PsycINFO for the American Psychological Association, that places dataset management in the larger and shifting context of managing journal articles' supplemental materials.
Lisa Johnston

HPCwire: SDSC Cloud Supports New NSF Mandate for Data Management

  • Standard "on-demand" storage costs for UC researchers on the SDSC Cloud start at only $3.25 a month per 100 GB of storage. A "condo" option, which allows users to make a cost-effective long-term investment in hardware that becomes part of the SDSC Cloud, is also available. Full details can be found at https://cloud.sdsc.edu/hp/index.php.
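As a quick sanity check on the quoted rate (a sketch; the pricing and units are only what the excerpt states), the monthly and yearly cost at $3.25 per 100 GB works out as:

```python
# Back-of-the-envelope check of the quoted SDSC Cloud on-demand rate:
# $3.25 per month per 100 GB of storage.
RATE_PER_100GB_MONTH = 3.25

def monthly_cost(gigabytes):
    """Cost in dollars per month at the quoted on-demand rate."""
    return gigabytes / 100 * RATE_PER_100GB_MONTH

# One terabyte (1000 GB), per month and per year:
per_tb_month = monthly_cost(1000)   # 32.50
per_tb_year = per_tb_month * 12     # 390.00
print(f"${per_tb_month:.2f}/TB/month, ${per_tb_year:.2f}/TB/year")
```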
Amy West

Interagency Data Stewardship/Citations/provider guidelines - Federation of Earth Scienc...

    • Amy West: I'm a little confused by what's meant by "data sets should be cited like books," since the guidelines go on to provide really good reasons why data aren't like books, e.g. the need for subsetting information and access dates for dynamic databases.
  • The guidelines build from the IPY Guidelines and are compatible with the DataCite Metadata Scheme for the Publication and Citation of Research Data, Version 2.2, July 2011.
  • In some cases, the data set authors may have also published a paper describing the data in great detail. These sort of data papers should be encouraged, and both the paper and the data set should be cited when the data are used.
  • Ongoing updates to a time series do change the content of the data set, but they do not typically constitute a new version or edition of a data set. New versions typically reflect changes in sampling protocols, algorithms, quality control processes, etc. Both a new version and an update may be reflected in the release date.
  • Locator, Identifier, or Distribution Medium
  • Then it is necessary to include a persistent reference to the location of the data.
  • This may be the most challenging aspect of data citation. It is necessary to enable "micro-citation" or the ability to refer to the specific data used--the exact files, granules, records, etc.
  • Data stewards should suggest how to reference subsets of their data. With Earth science data, subsets can often be identified by referring to a temporal and spatial range.
  • A particular data set may be part of a compilation, in which case it is appropriate to cite the data set somewhat like a chapter in an edited volume.
  • Increasingly, publishers are allowing data supplements to be published along with peer-reviewed research papers. When using the data supplement, one need only cite the parent reference.
  • Confusingly, a Digital Object Identifier is a locator. It is a Handle-based scheme whereby the steward of the digital object registers a location (typically a URL) for the object. There is no guarantee that the object at the registered location will remain unchanged. Consider a continually updated data time series, for example.
  • While it is desirable to uniquely identify the cited object, it has proven extremely challenging to identify whether two data sets or data files are scientifically identical.
  • At this point, we must rely on location information combined with other information such as author, title, and version to uniquely identify data used in a study.
  • The key to making registered locators, such as DOIs, ARKS, or Handles, work unambiguously to identify and locate data sets is through careful tracking and documentation of versions.
  • how to handle different data set versions relative to an assigned locator.
  • Track major_version.minor_version.[archive_version].
  • Typically, something that affects the whole data set like a reprocessing would be considered a major version.
  • Assign unique locators to major versions.
  • Old locators for retired versions should be maintained and point to some appropriate web site that explains what happened to the old data if they were not archived.
  • A new major version leads to the creation of a new collection-level metadata record that is distributed to appropriate registries. The older metadata record should remain with a pointer to the new version and with explanation of the status of the older version data.
  • Major and minor version should be listed in the recommended citation.
  • Minor versions should be explained in the documentation.
  • Ongoing additions to an existing time series need not constitute a new version. This is one reason for capturing the date accessed when citing the data.
  • we believe it is currently impossible to fully satisfy the requirement of scientific reproducibility in all situations
  • The purposes of data citation:
      * to aid scientific reproducibility through direct, unambiguous reference to the precise data used in a particular study (the paramount purpose, and the hardest to achieve);
      * to provide fair credit for data creators or authors, data stewards, and other critical people in the data production and curation process;
      * to ensure scientific transparency and reasonable accountability for authors and stewards;
      * to aid in tracking the impact of a data set and the associated data center through references in the scientific literature;
      * to help data authors verify how their data are being used;
      * to help future data users identify how others have used the data.
  • The ESIP Preservation and Stewardship cluster has examined these and other current approaches and has found that they are generally compatible and useful, but they do not entirely meet all the purposes of Earth science data citation.
  • In general, data sets should be cited like books.
  • They need to use the style dictated by their publishers, but by providing an example, data stewards can give users all the important elements that should be included in their citations of data sets.
  • Access Date and Time--because data can be dynamic and changeable in ways that are not always reflected in release dates and versions, it is important to indicate when on-line data were accessed.
  • Additionally, it is important to provide a scheme for users to indicate the precise subset of data that were used. This could be the temporal and spatial range of the data, the types of files used, a specific query id, or other ways of describing how the data were subsetted.
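The citation elements the guidelines call for (major.minor version, locator, access date, subset description) can be sketched as a small formatter. All names and values below are illustrative, not taken from the guidelines:

```python
from datetime import date

def format_data_citation(authors, year, title, version, archive, locator,
                         accessed, subset=None):
    """Assemble a data-set citation carrying the elements the guidelines
    recommend: major.minor version, locator, access date, and an
    optional description of the subset used."""
    parts = [
        f"{authors} ({year}). {title}, Version {version}.",
        f"{archive}. {locator}.",
        f"Accessed {accessed.isoformat()}.",
    ]
    if subset:
        parts.append(f"Subset used: {subset}.")
    return " ".join(parts)

citation = format_data_citation(
    authors="Doe, J.",                       # hypothetical data author
    year=2011,
    title="Example Sea Surface Temperature Time Series",
    version="2.1",                           # major.minor, per the guidelines
    archive="Example Earth Science Archive",
    locator="doi:10.0000/example",           # placeholder locator, not a real DOI
    accessed=date(2011, 7, 15),
    subset="60-90N, Jan 2000 - Dec 2010",    # temporal and spatial range
)
print(citation)
```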
Lisa Johnston

Scientific Data Sharing Project

  • The Data Sharing Project proposes to further this goal, initially in the field of medicine, by working to create a raw data sharing program that will serve as a model for other disciplines attempting to make their own way in this arena.
Lisa Johnston

Digital Preservation Courses & Workshops - Digital Preservation Outreach and Education ...

  • More online training opportunities... the DPOE program.
Lisa Johnston

NSF to Ask Every Grant Applicant for Data Management Plan - ScienceInsider

  • Scientists seeking funding from the National Science Foundation (NSF) will soon need to spell out how they plan to manage the data they hope to collect. It's part of a broader move by NSF and other federal agencies to emphasize the importance of community access to data.
Lisa Johnston

DigitalKoans » Blog Archive » Planets Project Deposits "Digital Genome" Ti...

  • Over the last decade the digital age has seen an explosion in the rate of data creation. Estimates from 2009 suggest that over 100 GB of data has already been created for every single individual on the planet, ranging from holiday snaps to health records; that's over 1 trillion CDs' worth of data, equivalent to 24 tons of books per person!
Amy West

PLoS Computational Biology: Defrosting the Digital Library: Bibliographic Tools for the...

  • Presently, the number of abstracts considerably exceeds the number of full-text papers,
  • full papers that are available electronically are likely to be much more widely read and cited
  • Since all of these libraries are available on the Web, increasing numbers of tools for managing digital libraries are also Web-based. They rely on Uniform Resource Identifiers (URIs [25] or “links”) to identify, name, and locate resources such as publications and their authors.
  • We often take URIs for granted, but these humble strings are fundamental to the way the Web works [58] and how libraries can exploit it, so they are a crucial part of the cyberinfrastructure [59] required for e-science on the Web.
  • link to data (the full-text of a given article),
  • To begin with, a user selects a paper, which will have come proximately from one of four sources: 1) searching some digital library, “SEARCH” in Figure 4; 2) browsing some digital library (“BROWSE”); 3) a personal recommendation, word-of-mouth from colleague, etc., (“RECOMMEND”); 4) referred to by reading another paper, and thus cited in its reference list (“READ”)
  • There is no universal method to retrieve a given paper, because there is no single way of identifying publications across all digital libraries on the Web
  • Publication metadata often gets “divorced” from the data it is about, and this forces users to manage each independently, a cumbersome and error-prone process.
  • There is no single way of representing metadata, and without adherence to common standards (which largely already exist, but in a plurality) there never will be.
  • Where DOIs exist, they are supposed to be the definitive URI. This kind of automated disambiguation, of publications and authors, is a common requirement for building better digital libraries
  • Publication metadata are essential for machines and humans in many tasks, not just the disambiguation described above. Despite their importance, metadata can be frustratingly difficult to obtain.
  • So, given an arbitrary URI, there are only two guaranteed options for getting any metadata associated with it. Using http [135], it is possible for a human (or machine) to do the following.
  • This technique works, but is not particularly robust or scalable because every time the style of a particular Web site changes, the screen-scraper will probably break as well
  • This returns metadata only, not the whole resource. These metadata will not include the author, journal, title, date, etc., of
  • As it stands, it is not possible to perform mundane and seemingly simple tasks such as, “get me all publications that fulfill some criteria and for which I have licensed access as PDF” to save locally, or “get me a specific publication and all those it immediately references”.
  • Having all these different metadata standards would not be a problem if they could easily be converted to and from each other, a process known as “round-tripping”.
  • many of these mappings are non-trivial, e.g., XML to RDF and back again
  • more complex metadata such as the inbound and outbound citations, related articles, and “supplementary” information.
  • Personalization allows users to say this is my library, the sources I am interested in, my collection of references, as well as literature I have authored or co-authored. Socialization allows users to share their personal collections and see who else is reading the same publications, including added information such as related papers with the same keyword (or “tag”) and what notes other people have written about a given publication.
  • CiteULike normalizes bookmarks before adding them to its database, which means it calculates whether each URI bookmarked identifies an identical publication added by another user, with an equivalent URI. This is important for social tagging applications, because part of their value is the ability to see how many people (and who) have bookmarked a given publication. CiteULike also captures another important bibliometric, viz how many users have potentially read a publication, not just cited it.
  • Connotea uses MD5 hashes [157] to store URIs that users bookmark, and normalizes them after adding them to its database, rather than before.
  • The source code for Connotea [159] is available, and there is an API that allows software engineers to build extra functionality around Connotea, for example the Entity Describer [160].
  • Personalization and socialization of information will increasingly blur the distinction between databases and journals [175], and this is especially true in computational biology where contributions are particularly of a digital nature.
  • This is usually because they are either too “small” or too “big” to fit into journals.
  • As we move in biology from a focus on hypothesis-driven to data-driven science [1],[181],[182], it is increasingly recognized that databases, software models, and instrumentation are the scientific output, rather than the conventional and more discursive descriptions of experiments and their results.
  • In the digital library, these size differences are becoming increasingly meaningless as data, information, and knowledge become more integrated, socialized, personalized, and accessible. Take Postgenomic [183], for example, which aggregates scientific blog posts from a wide variety of sources. These posts can contain commentary on peer-reviewed literature and links into primary database sources. Ultimately, this means that the boundaries between the different types of information and knowledge are continually blurring, and future tools seem likely to continue this trend.
  • The identity of people is a twofold problem because applications need to identify people as users in a system and as authors of publications.
  • Passing valuable data and metadata onto a third party requires that users trust the organization providing the service. For large publishers such as Nature Publishing Group, responsible for Connotea, this is not necessarily a problem.
  • Businesses may unilaterally change their data model, making the tools for accessing their data backwards incompatible, a common occurrence in bioinformatics.
  • Although the practice of sharing raw data immediately, as with Open Notebook Science [190], is gaining ground, many users are understandably cautious about sharing information online before peer-reviewed publication.
  • Yes, but Alexandria was also a lot smaller; not totally persuaded by the analogy here...
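The URI normalization and MD5 hashing that CiteULike and Connotea are described as doing can be sketched roughly as follows; the normalization rules here are a toy approximation for illustration, not either service's actual algorithm:

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def normalize_uri(uri):
    """Very rough normalization: lowercase the scheme and host, drop the
    fragment and any trailing slash, so trivially different URIs for the
    same publication compare equal. Real services use far more
    elaborate, per-publisher rules."""
    parts = urlsplit(uri.strip())
    path = parts.path.rstrip("/")
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path,
                       parts.query, ""))  # empty string discards the fragment

def bookmark_key(uri):
    """MD5 hex digest of the normalized URI, usable as a database key
    for deduplicating bookmarks across users."""
    return hashlib.md5(normalize_uri(uri).encode("utf-8")).hexdigest()

a = bookmark_key("HTTP://Example.org/paper/123/")
b = bookmark_key("http://example.org/paper/123#abstract")
print(a == b)  # both bookmarks collapse to one publication record
```

Deduplicating on a key like this is what lets a social-tagging site count how many distinct users have bookmarked (and so potentially read) a given publication.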
Amy West

Fridge-sized tape recorder could crack lunar mysteries - ABC News (Australian Broadcast...

  • Usually I hear about this with social science data, but the basic issue is the same: obsolete media and almost unrecoverable data loss.
Amy West

OpenGIS Transducer Markup Language (TML) Encoding Specification

  • TML defines:
      * a set of models describing the response characteristics of a transducer
      * an efficient method for transporting sensor data and preparing it for fusion through spatial and temporal associations
Amy West

Climate of 2007

  • The June-August summer season ended with a long-lasting heatwave that produced more than 2000 new daily high temperature records across the southern and central U.S., according to scientists at NOAA's National Climatic Data Center in Asheville, N.C.
Amy West

National Patterns of R and D Resources

  • Describes and analyzes current patterns of research and development (R&D) in the US. In years when the full report is not published, the Division of Science Resources Statistics makes available "data update tables" to provide public access to the most c
Amy West

YOKOFAKUN: A survey of the Proteins in Wikipedia

  • Interesting project and methodology...
Amy West

Water Resources Data

  • Home page for USGS water resources data sets.
Amy West

DataStaR

  • Ah-hah! Bookmarks can be private and shared with groups!
Amy West

NASA tackles archive data

  • DMF ultimately will allow the agency to archive and manage 40 petabytes of information — an amount equal to approximately 2,000 times the size of the entire print collection of the Library of Congress, NASA officials said.
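A rough check of that comparison (assuming decimal, SI petabytes): dividing the 40 PB capacity by the 2,000-fold factor implies a Library of Congress print collection of about 20 TB, consistent with commonly cited order-of-magnitude estimates:

```python
# Sanity check on the quoted comparison: 40 petabytes is said to equal
# roughly 2,000 copies of the Library of Congress print collection.
PETABYTE = 10**15                   # decimal (SI) petabytes assumed
dmf_capacity = 40 * PETABYTE

loc_print_estimate = dmf_capacity / 2000
print(f"Implied LoC print collection: {loc_print_estimate / 10**12:.0f} TB")
```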