Initiated by the Library of Congress, BIBFRAME provides a foundation for the future of bibliographic description, both on the web and in the broader networked world. This site presents general information about the project, including presentations, FAQs, and links to working documents. In addition to being a replacement for MARC, BIBFRAME serves as a general model for expressing and connecting bibliographic data. A major focus of the initiative will be to determine a transition path for the MARC 21 formats while preserving the robust data exchange that has supported resource sharing and cataloging cost savings in recent decades.
LAWA will federate distributed FIRE facilities with the rich Web repository of the European Archive to create a Virtual Web Observatory, using Web data analytics as a case study to validate our design. The outcome of our work will enable Internet-scale analysis of data and bring the content aspect of the Internet onto the roadmap of Future Internet Research. In four work packages we will extend the open-source Hadoop software with novel methods for wide-area data access, distributed storage and indexing, scalable data aggregation and analysis along the time dimension, and automatic classification of Web contents.
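A minimal sketch of what "scalable data aggregation along the time dimension" can look like in the MapReduce style that Hadoop popularized, here in plain Python for illustration; the record format and month-level bucketing are assumptions, not the project's actual job design.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical crawl records: (url, ISO timestamp) pairs. A real deployment
# would run this as a Hadoop job over archived Web crawls.
records = [
    ("http://example.org/a", "2011-03-01T10:00:00"),
    ("http://example.org/b", "2011-03-01T14:30:00"),
    ("http://example.org/a", "2011-04-02T09:15:00"),
]

def map_phase(records):
    """Map step: emit (month-bucket, 1) for every crawled page."""
    for url, ts in records:
        bucket = datetime.fromisoformat(ts).strftime("%Y-%m")
        yield bucket, 1

def reduce_phase(pairs):
    """Reduce step: sum counts per time bucket."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

counts = reduce_phase(map_phase(records))
print(counts)  # {'2011-03': 2, '2011-04': 1}
```

The same map/reduce split is what lets the aggregation scale: the map step can run on each storage node independently, and only small (bucket, count) pairs cross the wide-area network.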
On top of our platform we have developed new sophisticated statistical models and machine learning algorithms for a wide range of applications, including product targeting, market segmentation, community detection, network security, text analysis, and computer vision. These models enable our customers to extract more value from their data and to better understand and respond to a rapidly evolving world.
The Intelligence in Wikipedia Project aims to accelerate the extraction of Wikipedia knowledge, e.g., through the construction of infoboxes, and to link the resulting schemata together to form a knowledge base of outstanding size. Not only will this 'semantified Wikipedia' be an even more valuable resource for AI, but it will also support faceted browsing and simple forms of inference that may increase the recall of question-answering systems.
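To make the infobox idea concrete, here is a small sketch of pulling attribute-value pairs out of infobox wikitext; the snippet and field names are illustrative, and this simple pattern is far less robust than the project's actual extractors.

```python
import re

# Hypothetical wikitext snippet containing an infobox template.
wikitext = """{{Infobox scientist
| name       = Alan Turing
| birth_date = 23 June 1912
| field      = Computer science
}}"""

def extract_infobox(text):
    """Return infobox fields as a dict of attribute -> value."""
    fields = {}
    # Each infobox line has the form "| key = value".
    for m in re.finditer(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$", text, re.MULTILINE):
        fields[m.group(1)] = m.group(2)
    return fields

facts = extract_infobox(wikitext)
print(facts["name"])  # Alan Turing
```

Linking the extracted schemata across articles (e.g., every `Infobox scientist` sharing a `field` attribute) is what turns many such dicts into a queryable knowledge base.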
The Knowledge Media Institute (KMi) was set up in 1995 in recognition of the need for the Open University to be at the forefront of research and development in a convergence of areas that impacted on the OU's very nature: Cognitive and Learning Sciences, Artificial Intelligence and Semantic Technologies, and Multimedia. We chose to call this convergence Knowledge Media.
TEXT ANALYSIS
Two million news articles are published every day. Someone's probably talking about you. We track media in real time and measure it for emotional bias.
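One common way to "measure emotional bias" in news text is lexicon-based sentiment scoring. The sketch below shows the idea under stated assumptions: the word lists are tiny illustrations, not any vendor's actual model.

```python
# Toy sentiment lexicons (illustrative only).
POSITIVE = {"gain", "success", "praise", "growth"}
NEGATIVE = {"loss", "scandal", "crisis", "decline"}

def emotional_bias(text):
    """Score in [-1, 1]: negative = critical tone, positive = favorable."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no emotional vocabulary found
    return (pos - neg) / (pos + neg)

print(emotional_bias("quarterly growth and praise after success"))  # 1.0
print(emotional_bias("scandal deepens the crisis"))                 # -1.0
```

Real-time media tracking then reduces to running such a scorer over the article stream and aggregating scores per entity mentioned.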
The German Research Center for Artificial Intelligence (DFKI), with sites in Kaiserslautern, Saarbrücken, Bremen, and Berlin, is the leading German research institute in the field of innovative software technology. In the international scientific community, DFKI ranks among the most recognized "Centers of Excellence".
The Rhizomik initiative is inspired by the rhizome metaphor when working with knowledge from a scientific and technological, but also philosophical, point of view. This metaphor has accompanied our research on knowledge across many different fields, primarily the Semantic Web, Human-Computer Interaction, Web Science, Complex Systems, and Cognitive Science.
data.nature.com - the NPG Linked Data Platform
The Linked Data Platform provides access to datasets from NPG published as linked data and made available through SPARQL services. The data are queryable interactively through a form interface and remotely through a service endpoint.
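Querying such a service endpoint typically means sending a SPARQL query over HTTP. The sketch below builds a standard GET request for it; the endpoint URL is a placeholder assumption, not a documented path of the NPG platform.

```python
from urllib.parse import urlencode

# Assumed endpoint location for illustration only.
ENDPOINT = "http://data.nature.com/sparql"

query = """
SELECT ?article ?title WHERE {
  ?article <http://purl.org/dc/terms/title> ?title .
} LIMIT 10
"""

def build_request_url(endpoint, query):
    """Encode a SPARQL query as a GET request URL, per the SPARQL Protocol."""
    return endpoint + "?" + urlencode({"query": query, "format": "json"})

url = build_request_url(ENDPOINT, query)
print(url.startswith("http://data.nature.com/sparql?query="))  # True
# To execute remotely: urllib.request.urlopen(url) returns the result bindings.
```

The form interface mentioned above does the same thing interactively; the remote endpoint accepts exactly this kind of encoded request from programs.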
The NeuroCommons project seeks to make all scientific research materials - research articles, knowledge bases, research data, physical materials - as available and as usable as they can be. We do this by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". We want knowledge sources to combine easily and meaningfully, enabling semantically precise queries that span multiple information sources.
This site is a demonstrator of the Information Workbench, a platform for Linked Data application development. Designed as a self-service platform, the Information Workbench provides you with all the tools and features you need to quickly build your personal Linked Data applications. If you are interested in background information, please visit the Information Workbench Product page.
Kaggle is a platform for data prediction competitions that allows organizations to post their data and have it scrutinized by the world's best data scientists. In exchange for a prize, winning competitors provide the algorithms that beat all other methods of solving a data-crunching problem. Most data problems can be framed as a competition.
Mathematica is a computational software program used in scientific, engineering, and mathematical fields and other areas of technical computing. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois.
KSL conducts research in the areas of knowledge representation and automated reasoning in the Artificial Intelligence Laboratory of the Department of Computer Science at Stanford University. Current work focuses on enabling technology for the Semantic Web, hybrid reasoning, explaining answers from heterogeneous applications, deductive question-answering, representing and reasoning with multiple contexts, knowledge aggregation, ontology engineering, and knowledge-based technology for intelligence analysts and other knowledge workers.
Virtuoso is an innovative, enterprise-grade, multi-model data server for agile enterprises and individuals. It delivers an unrivaled platform-agnostic solution for data management, access, and integration.