
Group items tagged: data


Jack Park

Sluijs - 0 views

  •  
    The present research analyses the 'social visualization' tool Sense.us, a commercial interactive Web application in which U.S. Census data are visualized. Sense.us was developed as a tool for social data exploration and interaction, and it would be worthwhile to pay attention to the socio-cultural values that have driven the collection and categorization of the underlying U.S. Census datasets. It is argued that closer attention to the value-driven nature of U.S. Census statistics would greatly enhance the social appeal of Sense.us and would be a logical next step in the development of online social visualization tools. To make the socio-cultural values behind the statistics explicit in online visualizations, three strategies are offered: pro-active annotation, more attention to visual aesthetics, and a tighter integration of user profiles and represented data.
Jack Park

Yahoo! GeoPlanet - YDN - 0 views

  •  
    Yahoo! GeoPlanet helps bridge the gap between the real and virtual worlds by providing an open, permanent, and intelligent infrastructure for geo-referencing data on the Internet. This page provides open access to the underlying data under a Creative Commons Attribution license so that you can incorporate WOEIDs and the GeoPlanet hierarchy into your own applications.
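
    For illustration, a minimal Java sketch of looking up a WOEID for a place name. The where.yahooapis.com endpoint, the query format and the appid parameter shown here are assumptions based on how the GeoPlanet web service was historically documented, not something taken from this page.

```java
// Sketch: look up a WOEID for a place name via the (historical) GeoPlanet REST API.
// Assumptions: the where.yahooapis.com endpoint format and the 'appid' parameter;
// substitute a registered application id of your own.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class GeoPlanetLookup {
    public static void main(String[] args) throws Exception {
        String appId = "YOUR_APP_ID";                      // hypothetical placeholder
        String place = URLEncoder.encode("Springfield, MN", "UTF-8");
        // Assumed endpoint format: /v1/places.q('<place>')?format=xml&appid=...
        URL url = new URL("http://where.yahooapis.com/v1/places.q('" + place
                + "')?format=xml&appid=" + appId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                  // response contains <woeid> elements
            }
        }
    }
}
```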
Jack Park

Semantics Incorporated: Tying Web 3.0, the Semantic Web and Linked Data Together --- Pa... - 0 views

  •  
    I hope that "Smarter" is going to be a key tag for the Web 3.0, and yet I think "More Open, More Ubiquitous, with even More Information (Overload) and a little Smarter" is what it's really going to be. We'll have to wait till "Web 4.0" for a web that really is stepwise more intelligent, one that could really be called semantic and hold the hidden promises of a "Semantic Web". And the reason I believe this is that the community is focused on linking more stuff together in new ways and breaking down data siloes, much more than it is focused on creating new, smarter filters for all the data that's going to be made accessible that way.
Jack Park

DataPortability.org - Share and remix data using open standards - 0 views

  •  
    Data portability is the ability for people to reuse their data across interoperable applications. The DataPortability Project works to advance this vision by identifying, contextualizing and promoting efforts in the space.
Jack Park

Sindice - The semantic web index - 0 views

  •  
    Over 10 billion pieces of reusable information can already be found across 100 million web pages which embed RDF and Microformats. Start consuming this data today with Sindice Data Web services.
Jack Park

Open Data Commons - 0 views

  •  
    Open Data Commons is the home of a set of legal 'tools' to help you provide and use open data.
Jack Park

Glasshouse injects 3D representation of data into a virtual world | CyberTech News - 0 views

  •  
    Glasshouse by Green Phosphor is a gateway which can take a database query or a spreadsheet and place a 3D representation of it into a virtual world. Users can see the data, drill into it, re-sort it, and explore it interactively, all from within a virtual world. Glasshouse produces graphs which are avatars of the data itself.
Jack Park

SourceForge.net: RapidMiner (YALE) -- Java Data Mining - 0 views

  •  
    RapidMiner (aka YALE): data mining, machine learning, knowledge discovery and business intelligence in Java. 400+ operators: data mining (incl. Weka), learning, preprocessing, validation, visualization. GUI, API, XML, analysis, knowledge discovery, databases, business intelligence.
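
    Since the entry mentions the bundled Weka operators, here is a minimal sketch using the Weka Java API directly (which RapidMiner/YALE wraps as operators). The ARFF file name and the assumption that the last attribute is the class label are illustrative.

```java
// Minimal data-mining sketch with the Weka Java API: load an ARFF file, train a
// C4.5-style decision tree, and evaluate it with 10-fold cross-validation.
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DecisionTreeExample {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");   // hypothetical input file
        data.setClassIndex(data.numAttributes() - 1);    // assume last column is the label

        J48 tree = new J48();                            // decision-tree learner
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1)); // 10-fold CV
        System.out.println(eval.toSummaryString());
    }
}
```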
Jack Park

x2exp.pdf (application/pdf Object) - 0 views

  •  
    "But invariably, simple models and a lot of data trump more elaborate models based on less data."
Jack Park

DallasWorkshop - NCBO Wiki - 0 views

  •  
    The aims of clinical and translational research are to achieve a better understanding of the pathogenesis of human disease in order to develop effective diagnostic, therapeutic and prevention strategies. Biomedical informatics can play an important role in supporting this research by facilitating the management, integration, analysis and exchange of data derived from and related to the research problems being studied. A key aspect of this support is to bring clarity, rigor and formalism to the representation of (1) disease initiation, progression, pathogenesis, signs, symptoms, assessments, clinical and laboratory findings, disease diagnosis, treatment, treatment response and outcome, and (2) the interrelations between these distinct entities both in patient management and in clinical research, thus allowing the data to be more readily retrievable and shareable, and more able to serve in the support of algorithmic reasoning.
Jack Park

The National Center for Biomedical Ontology - 0 views

  •  
    The National Center for Biomedical Ontology is a consortium of leading biologists, clinicians, informaticians, and ontologists who develop innovative technology and methods allowing scientists to create, disseminate, and manage biomedical information and knowledge in machine-processable form. Our vision is that all biomedical knowledge and data are disseminated on the Internet using principled ontologies, such that they are semantically interoperable and useful for improving biomedical science and clinical care. Our resources include the Open Biomedical Ontologies (OBO) library, the Open Biomedical Data (OBD) repositories, and tools for accessing and using this information in research. The Center collaborates with biomedical researchers conducting Driving Biological Projects to enable outside research and stimulate technology development in the Center. The Center undertakes outreach and educational activities (Biomedical Informatics Program) to train future researchers to use biomedical ontologies and related tools with the goal of enhancing scientific discovery.
Jack Park

triplify.org : About - 0 views

  •  
    Triplify is based on defining relational database queries for a specific Web application in order to retrieve valuable information and to convert the results of these queries into RDF, JSON and Linked Data. Experience has shown that for most web applications a relatively small number of queries (mostly between 3 and 7) is sufficient to extract the important information. After generating such database views, the Triplify software can be used to convert these views into an RDF, JSON or Linked Data representation, which can be shared and accessed on the (Semantic) Web.
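
    Triplify itself is configured with SQL queries in its own configuration format, which is not reproduced here. The following is a rough JDBC + Apache Jena sketch of the underlying idea only: run a small relational query and emit the result rows as RDF. The database, table, column names and the FOAF vocabulary choice are illustrative assumptions.

```java
// Rough sketch of the idea behind Triplify: map rows of a relational query to RDF
// triples. This is NOT Triplify's own configuration format.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class RowsToRdf {
    public static void main(String[] args) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("foaf", "http://xmlns.com/foaf/0.1/");

        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/shop", "user", "pass");   // assumed database
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM users")) {
            while (rs.next()) {
                // One resource per row; the URI pattern is an assumption.
                Resource user = model.createResource("http://example.org/user/" + rs.getLong("id"));
                user.addProperty(RDF.type, model.createResource("http://xmlns.com/foaf/0.1/Person"));
                user.addProperty(model.createProperty("http://xmlns.com/foaf/0.1/name"),
                                 rs.getString("name"));
            }
        }
        model.write(System.out, "TURTLE");   // serialize the generated triples
    }
}
```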
Jack Park

Main Page - NeuroCommons - 0 views

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as usable as they can be. We do this by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". We want knowledge sources to combine meaningfully, enabling semantically precise queries that span multiple information sources. Our work covers general data and knowledge sources used in computational biology as well as sources specific to neuroscience and neuromedicine. The practices that we develop and promote are designed to play well on the Semantic Web. We view our technical work not as creating a new service or content library, although we do both, but rather as helping to promote the growth of semantically linked scientific information.
Jack Park

A Prototype Knowledge Base for the Life Sciences - 0 views

  •  
    The prototype we describe is a biomedical knowledge base, constructed for a demonstration at Banff WWW2007 , that integrates 15 distinct data sources using currently available Semantic Web technologies such as the W3C standard Web Ontology Language [OWL] and Resource Description Framework [RDF]. This report outlines which resources were integrated, how the knowledge base was constructed using free and open source triple store technology, how it can be queried using the W3C Recommended RDF query language SPARQL [SPARQL], and what resources and inferences are involved in answering complex queries. While the utility of the knowledge base is illustrated by identifying a set of genes involved in Alzheimer's Disease, the approach described here can be applied to any use case that integrates data from multiple domains.
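
    As a sketch of the query pattern the report describes, the snippet below loads RDF into Apache Jena's in-memory model (standing in for the free and open source triple store) and runs a SPARQL SELECT over it. The input file, the example namespace and the gene/disease predicate are placeholders, not the actual schema of the Banff demonstration knowledge base.

```java
// Sketch: load integrated RDF into an in-memory triple store and query it with SPARQL.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class KnowledgeBaseQuery {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("integrated-data.rdf");   // hypothetical merged data set

        String sparql =
            "PREFIX ex: <http://example.org/biomed#>\n" +     // placeholder vocabulary
            "SELECT ?gene WHERE {\n" +
            "  ?gene ex:associatedWith ex:AlzheimersDisease .\n" +
            "}";

        Query query = QueryFactory.create(sparql);
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("gene"));
            }
        }
    }
}
```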
Jack Park

Apache UIMA - Apache UIMA - 0 views

  •  
    Unstructured Information Management applications are software systems that analyze large volumes of unstructured information in order to discover knowledge that is relevant to an end user. UIMA is a framework and SDK for developing such applications. An example UIM application might ingest plain text and identify entities, such as persons, places, organizations; or relations, such as works-for or located-at. UIMA enables such an application to be decomposed into components, for example "language identification" -> "language specific segmentation" -> "sentence boundary detection" -> "entity detection (person/place names etc.)". Each component must implement interfaces defined by the framework and must provide self-describing metadata via XML descriptor files. The framework manages these components and the data flow between them. Components are written in Java or C++; the data that flows between components is designed for efficient mapping between these languages. UIMA additionally provides capabilities to wrap components as network services, and can scale to very large volumes by replicating processing pipelines over a cluster of networked nodes.
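
    A minimal sketch of driving a UIMA analysis engine from Java: instantiate it from an XML descriptor, feed a document through a CAS, and read back the annotations it produced. The descriptor file name and the example text are placeholders for whatever annotator or aggregate pipeline you have assembled.

```java
// Minimal UIMA sketch: descriptor -> analysis engine -> CAS -> annotations.
import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.cas.CAS;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.util.XMLInputSource;

public class RunPipeline {
    public static void main(String[] args) throws Exception {
        XMLInputSource in = new XMLInputSource("MyEntityDetector.xml"); // hypothetical descriptor
        AnalysisEngineDescription desc =
            UIMAFramework.getXMLParser().parseAnalysisEngineDescription(in);
        AnalysisEngine ae = UIMAFramework.produceAnalysisEngine(desc);

        CAS cas = ae.newCAS();
        cas.setDocumentText("Jack Park lives in California.");
        ae.process(cas);                       // run the component (or aggregate) pipeline

        for (AnnotationFS a : cas.getAnnotationIndex()) {
            System.out.println(a.getType().getName() + ": " + a.getCoveredText());
        }
        ae.destroy();
    }
}
```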
Jack Park

Home - 0 views

  •  
    Inspired by Yahoo's Pipes, DERI Web Data Pipes implement a generalization which can also deal with formats such as RDF (RDFa), Microformats and generic XML. DERI Pipes are Open Source Software, and as such they can be easily extended and applied in use cases where a local deployment is needed. DERI Pipes provide a rich web GUI where pipes can be graphically edited, debugged and invoked. The execution engine is also available as a standalone JAR, which is ideal for embedded use. In general, DERI Pipes produce output streams of data (e.g. XML, RDF, JSON) that can be used by applications. However, when invoked by a normal browser, they provide an end-user GUI where the user can enter parameter values and browse the results.
Jack Park

wiki.dbpedia.org : About - 0 views

  •  
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to make sophisticated queries against Wikipedia, and to link other data sets on the Web to Wikipedia data.
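
    For illustration, a small Java/Jena sketch of running one of those queries against DBpedia's public SPARQL endpoint. The endpoint URL is DBpedia's well-known one; the exact ontology property names can differ between DBpedia releases.

```java
// Sketch: query the DBpedia SPARQL endpoint with Apache Jena (people born in Berlin).
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class DbpediaQuery {
    public static void main(String[] args) {
        String sparql =
            "PREFIX dbo: <http://dbpedia.org/ontology/>\n" +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "SELECT ?person ?name WHERE {\n" +
            "  ?person dbo:birthPlace <http://dbpedia.org/resource/Berlin> ;\n" +
            "          rdfs:label ?name .\n" +
            "  FILTER (lang(?name) = \"en\")\n" +
            "} LIMIT 10";

        try (QueryExecution qe =
                 QueryExecutionFactory.sparqlService("https://dbpedia.org/sparql", sparql)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getLiteral("name").getString());
            }
        }
    }
}
```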
Jack Park

Sphinx - Free open-source SQL full-text search engine - 0 views

  •  
    Sphinx is a full-text search engine, distributed under GPL version 2. A commercial license is also available for embedded use. Generally, it's a standalone search engine, meant to provide fast, size-efficient and relevant full-text search functions to other applications. Sphinx was specially designed to integrate well with SQL databases and scripting languages. Currently, built-in data sources support fetching data either via a direct connection to MySQL or PostgreSQL, or using the XML pipe mechanism (a pipe to the indexer in a special XML-based format which Sphinx recognizes).
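
    An illustrative sphinx.conf sketch of the MySQL data-source integration the entry describes: a source block telling the indexer which SQL query to pull documents from, and an index built on that source. The database, table and column names are assumptions.

```
# Illustrative sphinx.conf sketch (assumed database, table and column names).
source documents_src
{
    type        = mysql
    sql_host    = localhost
    sql_user    = sphinx
    sql_pass    = secret
    sql_db      = mydb
    # The first selected column must be the document id; the rest become full-text fields.
    sql_query   = SELECT id, title, body FROM documents
}

index documents_idx
{
    source      = documents_src
    path        = /var/data/sphinx/documents_idx
}
```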
Jack Park

Ontomat Homepage - Annotation Portal - 0 views

  •  
    OntoMat-Annotizer is a user-friendly interactive webpage annotation tool. It supports the user in the task of creating and maintaining ontology-based OWL markup, i.e. creating OWL instances, attributes and relationships. It includes an ontology browser for exploring the ontology and instances, and an HTML browser that displays the annotated parts of the text. It is Java-based and provides a plugin interface for extensions. The intended user is the individual annotator, i.e. people who want to enrich their web pages with OWL metadata. Instead of manually annotating the page with a text editor, say emacs, OntoMat allows the annotator to highlight relevant parts of the web page and create new instances via drag-and-drop interactions. It supports the metadata creation phase of the lifecycle. It is planned that a future version will contain an information extraction plugin offering a wizard that suggests which parts of the text are relevant for annotation. That aspect will help to ease the time-consuming annotation task.
Jack Park

x-media project home - 0 views

  •  
    X-Media addresses the issue of knowledge management in complex distributed environments. It will study, develop and implement large-scale methodologies and techniques for knowledge management able to support sharing and reuse of knowledge that is distributed across different media (images, documents and data) and repositories (databases, knowledge bases, document repositories, etc.). The project started in March 2006 and will last for 4 years. It has a budget topping EUR 13.6M (EUR 9.9M from the EU). 15 partners are involved, from the UK, Germany, Italy, France, Slovenia, Greece and Norway.