The share intent is designed to give applications a simple mechanism for sharing data from the current page. Any application the user has installed can register to handle the share action, including but not limited to social networks and email services.
The "Share" protocol is intended to be a lightweight sharing facility.
OutWit Hub explores the Web for you, automatically collecting and organizing data and media from online sources. It breaks Web pages down into their constituent elements and, navigating from page to page automatically, extracts information elements and organizes them into usable collections.
The implementation is based on the TERRIER information retrieval framework and supports the following features:
- build an inverted index of a Wikipedia database provided in the original MediaWiki database schema
- compute ESA vectors for any text
- compute the cosine similarity of ESA vectors, which can be used as a semantic similarity measure (see the sketch after this list)
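To make the last step concrete, here is a minimal sketch of cosine similarity over ESA vectors, assuming a sparse concept-to-weight dict representation (an assumption for illustration, not the TERRIER-based implementation's actual data structure):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two sparse ESA vectors, each a mapping
    from Wikipedia concept to a TF-IDF-style weight."""
    # Dot product over the (usually small) overlap of the two vectors.
    dot = sum(w * v[c] for c, w in u.items() if c in v)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Toy ESA vectors sharing the concept "Jaguar".
text_a = {"Jaguar": 0.8, "Cat": 0.5}
text_b = {"Jaguar": 0.6, "Car": 0.7}
print(cosine_similarity(text_a, text_b))  # ~0.55
```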
WikipediaMiner is a toolkit for tapping the rich semantics encoded within Wikipedia. It makes it easy to integrate Wikipedia's knowledge into your own applications by:
- providing simplified, object-oriented access to Wikipedia's structure and content
- measuring how terms and concepts in Wikipedia are connected to each other (see the relatedness sketch below)
- detecting and disambiguating Wikipedia topics when they are mentioned in documents
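To make the relatedness point concrete, the sketch below implements the link-based relatedness measure published by WikipediaMiner's authors (Milne and Witten), which scores two articles by the overlap of the sets of articles linking to them; the toy link data is invented, and this is not WikipediaMiner's actual API:

```python
import math

def relatedness(in_links_a, in_links_b, total_articles):
    """Wikipedia Link-based Measure (Milne & Witten): relatedness of
    two articles from the sets of articles linking to each of them."""
    a, b = set(in_links_a), set(in_links_b)
    common = a & b
    if not common:
        return 0.0
    # Normalized-distance form: 1.0 means identical in-link sets.
    distance = (math.log(max(len(a), len(b))) - math.log(len(common))) / (
        math.log(total_articles) - math.log(min(len(a), len(b))))
    return max(0.0, 1.0 - distance)

# Invented in-link sets for two articles.
links_to_cat = {"Pet", "Mammal", "Felidae", "Whiskers"}
links_to_dog = {"Pet", "Mammal", "Canidae"}
print(relatedness(links_to_cat, links_to_dog, total_articles=1_000_000))
```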
Linked Open Services (LOS) are an approach to exposing services, that is, functionalities, on the Web using the same technologies associated with Linked Data, in particular HTTP, RDF, and SPARQL.
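As a rough sketch of that idea, a LOS-style client retrieves a service's RDF description over HTTP and interrogates it with SPARQL. The URIs and the los: vocabulary below are simplified placeholders, not a published ontology, and the description is inlined so the snippet runs standalone with rdflib:

```python
from rdflib import Graph

# In a real deployment this Turtle description would be fetched over
# HTTP from the service's URI (a placeholder here); it is inlined so
# the example is self-contained. The los: terms are invented.
description = """
@prefix los: <http://example.org/los#> .
<http://example.org/services/geocoder>
    los:expects "a postal address" ;
    los:returns "WGS84 coordinates" .
"""

g = Graph()
g.parse(data=description, format="turtle")

# Ask, via SPARQL, what each described service consumes and produces.
for row in g.query("""
    PREFIX los: <http://example.org/los#>
    SELECT ?service ?input ?output WHERE {
        ?service los:expects ?input ;
                 los:returns ?output .
    }
"""):
    print(row.service, ":", row.input, "->", row.output)
```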
Wikicat is the bibliographic catalog used by the Wikicite and WikiTextrose projects. It will be implemented as a Wikidata dataset using a data model design based upon IFLA's Functional Requirements for Bibliographic Records: Final Report (FRBR) [1], the various ISBD standards, the Library of Congress's MARC 21 specification, The Logical Structure of the Anglo-American Cataloguing Rules, Resource Description and Access (RDA), and the International Committee for Documentation (CIDOC)'s Conceptual Reference Model (CRM) [2]. The history and inter-relation of these cataloging standards is described in RDA presentations.
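To give a feel for the FRBR core of that design, here is a hypothetical sketch of FRBR's Group 1 entities (Work, Expression, Manifestation); the class and field names are illustrative only, not Wikicat's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of FRBR Group 1 entities (the fourth, Item, a
# single physical copy, is omitted for brevity). Field names are
# illustrative, not Wikicat's schema.

@dataclass
class Work:
    """The abstract intellectual creation, e.g. 'Hamlet'."""
    title: str

@dataclass
class Expression:
    """A realization of a Work, e.g. a particular translation."""
    work: Work
    language: str

@dataclass
class Manifestation:
    """A physical embodiment of an Expression, e.g. one edition."""
    expression: Expression
    publisher: str
    year: int

hamlet = Work("Hamlet")
translation = Expression(hamlet, language="de")
edition = Manifestation(translation, publisher="Example Press", year=2000)
```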
The Semantic Wikipedia would combine the properties of the Semantic Web with Wiki technology. In this enhancement, articles would carry properties (or traits), which could be mixed or combined to make articles members of dynamic categories chosen by user requests. Lists would no longer be limited to the numerous pre-formatted list articles; instead, a list could be created dynamically for all articles matching selected search properties.
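As a sketch of how such dynamic lists could work (the articles, properties, and helper below are invented for illustration), annotated articles can be filtered on demand instead of being maintained as hand-written list pages:

```python
# Invented toy data: each article carries property/value pairs.
articles = {
    "Berlin": {"type": "city", "country": "Germany", "population": 3_700_000},
    "Munich": {"type": "city", "country": "Germany", "population": 1_500_000},
    "Vienna": {"type": "city", "country": "Austria", "population": 1_900_000},
}

def dynamic_list(**criteria):
    """Build a list of all articles whose properties match the user's
    request, instead of relying on a pre-formatted list article."""
    return [name for name, props in articles.items()
            if all(props.get(k) == v for k, v in criteria.items())]

# 'All German cities', computed on demand from article properties.
print(dynamic_list(type="city", country="Germany"))  # ['Berlin', 'Munich']
```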
Wikidata is a proposed wiki-like database for various types of content. The project as proposed here requires significant changes to the software (or possibly completely new software), but it has the potential to centrally store and manage data from all Wikimedia projects, and to radically expand the range of content that can be built using wiki principles.
The Ookaboo RDF dump contains metadata for nearly 1,000,000 public domain and Creative Commons images covering more than 500,000 precise topics, such as places, people, and organism classifications, linked to DBpedia and Freebase.
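Consuming the dump might look like the following sketch with rdflib; the file name and the depicts predicate are guesses for illustration, not Ookaboo's actual vocabulary:

```python
from rdflib import Graph, URIRef

# Illustrative only: the file name and predicate URI are placeholders,
# not Ookaboo's actual dump layout or vocabulary.
g = Graph()
g.parse("ookaboo_dump.nt", format="nt")

topic = URIRef("http://dbpedia.org/resource/Jaguar")
depicts = URIRef("http://example.org/vocab#depicts")

# List every image whose metadata links it to the chosen topic.
for image, _, _ in g.triples((None, depicts, topic)):
    print(image)
```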
Reasoning is a powerful mechanism for drawing conclusions from facts. The Semantic Web contains vast amounts of data, which makes it an interesting source to use with one of the several available reasoners.
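A minimal illustration of the idea, using rdflib and one hand-rolled rule rather than a full reasoner: from the facts "every cat is a mammal" and "Felix is a cat", a forward-chaining pass over rdfs:subClassOf derives the new fact "Felix is a mammal".

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Facts: every Cat is a Mammal, and Felix is a Cat.
g.add((EX.Cat, RDFS.subClassOf, EX.Mammal))
g.add((EX.Felix, RDF.type, EX.Cat))

# Forward chaining with one RDFS rule:
# (?x rdf:type ?c) and (?c rdfs:subClassOf ?d)  =>  (?x rdf:type ?d)
changed = True
while changed:
    changed = False
    inferred = set()
    for x, _, c in g.triples((None, RDF.type, None)):
        for _, _, d in g.triples((c, RDFS.subClassOf, None)):
            if (x, RDF.type, d) not in g:
                inferred.add((x, RDF.type, d))
    for triple in inferred:
        g.add(triple)
        changed = True

print((EX.Felix, RDF.type, EX.Mammal) in g)  # True: a derived conclusion
```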