
Semantic-Web-Web3.0
Erwin Karbasi

Integrate disparate data sources with Semantic Web technology - 0 views

  • Figure 3. Architecture diagram for the application. As Figure 3 shows, data from the spreadsheet, the Yahoo! web service, and Wikipedia are converted to RDF. The RDF data is sorted, cross-referenced, and converted to XML by a SPARQL engine. An XSLT engine converts the XML to HTML to generate the final report.
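Not part of the original article, but the pipeline is concrete enough to sketch in Python. The sketch below assumes rdflib and lxml are installed; the input file names, the ex: vocabulary, and the XSLT stylesheet are hypothetical placeholders.

    # Sketch of the Figure 3 pipeline: RDF in, SPARQL over it, XML out, XSLT to HTML.
    # File names and the ex: vocabulary are placeholders, not from the article.
    from rdflib import Graph
    from lxml import etree

    # 1. Load the RDF converted from the spreadsheet, web service, and Wikipedia.
    g = Graph()
    for source in ("spreadsheet.rdf", "yahoo_service.rdf", "wikipedia.rdf"):
        g.parse(source)

    # 2. Sort and cross-reference the data with a SPARQL query.
    results = g.query("""
        PREFIX ex: <http://example.org/schema#>
        SELECT ?item ?label ?value
        WHERE { ?item ex:label ?label ; ex:value ?value . }
        ORDER BY ?label
    """)

    # 3. Serialize the result set as SPARQL Query Results XML
    #    (rdflib may return str or bytes depending on the version).
    xml_results = results.serialize(format="xml")
    if isinstance(xml_results, str):
        xml_results = xml_results.encode("utf-8")

    # 4. Transform the XML into the final HTML report with an XSLT stylesheet.
    transform = etree.XSLT(etree.parse("report.xsl"))
    print(str(transform(etree.fromstring(xml_results))))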
Erwin Karbasi

Relational Database and the Semantic Web | Semantic Universe - 0 views

  • Why do we need RDB2RDF? The need to map relational data to RDF is increasing. With the rise of Linked Data, more and more people want to publish their data on the web following the Linked Data principles, and most probably that data lives in relational databases. RDF can also be used for data integration: using a common standard data model with a standard query language (SPARQL) is very attractive. The two use cases for RDB2RDF are to publish relational data as RDF on the web, or to combine relational data with existing RDF. Use Case 1: This use case exemplifies the desire of people who want to join the Semantic Web by publishing their data as RDF and offering a SPARQL endpoint to their database. The next step would be to create links from their dataset to other RDF datasets on the web, but this is not in the scope of RDB2RDF. Use Case 2: This use case is oriented toward data integration. It can be divided into three sub-use cases, where we would like to combine our relational data with: ● structured data (relational databases, spreadsheets, CSV, etc.), ● existing RDF data on the web (Linked Data), ● unstructured data (HTML, PDF, etc.). We assume that the other sources we would like to combine have already been converted to RDF.
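As a rough illustration of Use Case 1 (not code from the article), the sketch below does a naive direct mapping from a relational table to RDF in Python, using the standard sqlite3 module and rdflib; the database file, the people table, and the choice of FOAF terms are all assumptions.

    # Naive RDB2RDF direct mapping: one URI per row, one property per column.
    # Database, table, and vocabulary choices are hypothetical.
    import sqlite3
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/resource/")

    conn = sqlite3.connect("company.db")
    g = Graph()

    for row_id, name, email in conn.execute("SELECT id, name, email FROM people"):
        person = EX[f"person/{row_id}"]            # row -> resource URI
        g.add((person, RDF.type, FOAF.Person))     # table -> class
        g.add((person, FOAF.name, Literal(name)))  # column -> property
        g.add((person, FOAF.mbox, Literal(email)))

    print(g.serialize(format="turtle"))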
Erwin Karbasi

How to publish Linked Data on the Web - 0 views

  • The goal of Linked Data is to enable people to share structured data on the Web as easily as they can share documents today. The term Linked Data was coined by Tim Berners-Lee in his Linked Data Web architecture note. The term refers to a style of publishing and interlinking structured data on the Web. The basic assumption behind Linked Data is that the value and usefulness of data increases the more it is interlinked with other data. In summary, Linked Data is simply about using the Web to create typed links between data from different sources. The basic tenets of Linked Data are to: use the RDF data model to publish structured data on the Web, and use RDF links to interlink data from different data sources.
  • The glue that holds together the traditional document Web is the hypertext links between HTML pages. The glue of the data web is RDF links. An RDF link simply states that one piece of data has some kind of relationship to another piece of data. These relationships can have different types. For instance, an RDF link that connects data about people can state that two people know each other; an RDF link that connects information about a person with information about publications in a bibliographic database might state that a person is the author of a specific paper.
  • In 'Dereferencing HTTP URIs' the W3C Technical Architecture Group (TAG) distinguish between two kinds of resources: information resources and non-information resources (also called 'other resources') . This distinction is quite important in a Linked Data context. All the resources we find on the traditional document Web, such as documents, images, and other media files, are information resources. But many of the things we want to share data about are not: People, physical products, places, proteins, scientific concepts, and so on. As a rule of thumb, all “real-world objects” that exist outside of the Web are non-information resources.
  • ...13 more annotations...
  • Dereferencing HTTP URIs URI Dereferencing is the process of looking up a URI on the Web in order to get information about the referenced resource. The W3C TAG draft finding about Dereferencing HTTP URIs introduced a distinction on how URIs identifying information resources and non-information resources are dereferenced: Information Resources: When a URI identifying an information resource is dereferenced, the server of the URI owner usually generates a new representation, a new snapshot of the information resource's current state, and sends it back to the client using the HTTP response code 200 OK. Non-Information Resources cannot be dereferenced directly. Therefore Web architecture uses a trick to enable URIs identifying non-information resources to be dereferenced: Instead of sending a representation of the resource, the server sends the client the URI of an information resource which describes the non-information resource, using the HTTP response code 303 See Other. This is called a 303 redirect. In a second step, the client dereferences this new URI and gets a representation describing the original non-information resource.
  • Content Negotiation HTML browsers usually display RDF representations as raw RDF code, or simply download them as RDF files without displaying them. This is not very helpful to the average user. Therefore, serving a proper HTML representation in addition to the RDF representation of a resource helps humans to figure out what a URI refers to. This can be achieved using an HTTP mechanism called content negotiation. HTTP clients send HTTP headers with each request to indicate what kinds of representation they prefer. Servers can inspect those headers and select an appropriate response. If the headers indicate that the client prefers HTML, then the server can generate an HTML representation. If the client prefers RDF, then the server can generate RDF. Content negotiation for non-information resources is usually implemented in the following way. When a URI identifying a non-information resource is dereferenced, the server sends a 303 redirect to an information resource appropriate for the client. Therefore, a data source often serves three URIs related to each non-information resource, for instance: http://www4.wiwiss.fu-berlin.de/factbook/resource/Russia (URI identifying the non-information resource Russia) http://www4.wiwiss.fu-berlin.de/factbook/data/Russia (information resource with an RDF/XML representation describing Russia) http://www4.wiwiss.fu-berlin.de/factbook/page/Russia (information resource with an HTML representation describing Russia)
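The article does not prescribe an implementation, but the 303-plus-content-negotiation pattern above can be sketched on the publisher side in a few lines of Python with Flask (assumed installed); the /resource, /data, /page layout mirrors the Russia example.

    # Sketch: content negotiation for a non-information resource, Flask-style.
    # The URI layout (/resource, /data, /page) follows the example above.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/resource/<name>")
    def resource(name):
        # Non-information resource: never answer 200 OK, send 303 to a description.
        best = request.accept_mimetypes.best_match(
            ["application/rdf+xml", "text/html"])
        if best == "application/rdf+xml":
            return redirect(f"/data/{name}", code=303)   # RDF description
        return redirect(f"/page/{name}", code=303)       # HTML description

    @app.route("/data/<name>")
    def data(name):
        # Information resource: an RDF/XML representation (stubbed out here).
        stub = '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"/>'
        return stub, 200, {"Content-Type": "application/rdf+xml"}

    if __name__ == "__main__":
        app.run()  # serve the sketch locally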
  • The picture below shows how dereferencing an HTTP URI identifying a non-information resource plays together with content negotiation: The client performs an HTTP GET request on a URI identifying a non-information resource. In our case a vocabulary URI. If the client is a Linked Data browser and would prefer an RDF/XML representation of the resource, it sends an Accept: application/rdf+xml header along with the request. HTML browsers would send an Accept: text/html header instead. The server recognizes the URI to identify a non-information resource. As the server cannot return a representation of this resource, it answers using the HTTP 303 See Other response code and sends the client the URI of an information resource describing the non-information resource. In the RDF case: RDF content location. The client now asks the server to GET a representation of this information resource, requesting again application/rdf+xml. The server sends the client an RDF/XML document containing a description of the original resource vocabulary URI.
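The same round trip viewed from the client side, sketched with the requests library and rdflib (both assumed installed); the factbook URI is the one quoted earlier and may no longer resolve, so treat it as illustrative.

    # Sketch: dereferencing a non-information resource URI as a Linked Data client.
    import requests
    from rdflib import Graph

    uri = "http://www4.wiwiss.fu-berlin.de/factbook/resource/Russia"  # illustrative
    headers = {"Accept": "application/rdf+xml"}

    # Step 1: the server answers 303 See Other with an information-resource URI.
    first = requests.get(uri, headers=headers, allow_redirects=False)
    assert first.status_code == 303
    description_uri = first.headers["Location"]

    # Step 2: dereference the description and parse the RDF/XML representation.
    second = requests.get(description_uri, headers=headers)
    g = Graph().parse(data=second.text, format="xml")
    print(len(g), "triples describing", uri)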
  • How to set RDF Links to other Data Sources RDF links enable Linked Data browsers and crawlers to navigate between data sources and to discover additional data. The application domain will determine which RDF properties are used as predicates. For instance, commonly used linking properties in the domain of describing people are foaf:knows, foaf:based_near and foaf:topic_interest . Examples of combining these properties with property values from DBpedia, the DBLP bibliography and the RDF Book Mashup are found in Tim Berners-Lee's and Ivan Herman's FOAF profiles. It is common practice to use the owl:sameAs property for stating that another data source also provides information about a specific non-information resource. An owl:sameAs link indicates that two URI references actually refer to the same thing. Therefore, owl:sameAs is used to map between different URI aliases (see Section 2.1). Examples of using owl:sameAs to indicate that two URIs talk about the same thing are again Tim's FOAF profile which states that http://www.w3.org/People/Berners-Lee/card#i identifies the same resource as http://www4.wiwiss.fu-berlin.de/bookmashup/persons/Tim+Berners-Lee and http://www4.wiwiss.fu-berlin.de/dblp/resource/person/100007. Other usage examples are found in DBpedia and the Berlin DBLP server. RDF links can be set manually, which is usually the case for FOAF profiles, or they can be generated by automated linking algorithms. This approach is usually taken to interlink large datasets.
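A minimal sketch (not from the article) of setting the two kinds of RDF links mentioned above with rdflib: a foaf:knows relationship link and an owl:sameAs identity link. The alice URI is a placeholder; the other URIs are the ones quoted in the text.

    # Sketch: adding foaf:knows and owl:sameAs links to a FOAF profile.
    from rdflib import Graph, URIRef
    from rdflib.namespace import FOAF, OWL

    g = Graph()
    me = URIRef("http://example.org/people/alice#me")            # placeholder
    timbl = URIRef("http://www.w3.org/People/Berners-Lee/card#i")

    # Relationship link pointing into another data source.
    g.add((me, FOAF.knows, timbl))

    # Identity link between two URI aliases for the same non-information resource.
    g.add((timbl, OWL.sameAs, URIRef(
        "http://www4.wiwiss.fu-berlin.de/bookmashup/persons/Tim+Berners-Lee")))

    print(g.serialize(format="turtle"))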
  • Recipes for Serving Information as Linked Data This chapter provides practical recipes for publishing different types of information as Linked Data on the Web. Information has to fulfill the following minimal requirements to be considered "published as Linked Data on the Web": Things must be identified with dereferenceable HTTP URIs. If such a URI is dereferenced asking for the MIME-type application/rdf+xml, a data source must return an RDF/XML description of the identified resource. URIs that identify non-information resources must be set up in one of these ways: Either the data source must return an HTTP response containing an HTTP 303 redirect to an information resource describing the non-information resource, as discussed earlier in this document. Or the URI for the non-information resource must be formed by taking the URI of the related information resource and appending a fragment identifier (e.g. #foo), as discussed in Recipe 7.1. Besides RDF links to resources within the same data source, RDF descriptions should also contain RDF links to resources provided by other data sources, so that clients can navigate the Web of Data as a whole by following RDF links. Which of the following recipes fits your needs depends on various factors, such as: How much data do you want to serve? If you only want to publish several hundred RDF triples, you might want to serve them as a static RDF file using Recipe 7.1. If your dataset is larger, you might want to load it into a proper RDF store and put the Pubby Linked Data interface in front of it as described in Recipe 7.3. How is your data currently stored? If your information is stored in a relational database, you can use D2R Server as described in Recipe 7.2. If the information is available through an API, you might implement a wrapper around this API as described in Recipe 7.4. If your information is represented in some other format such as Microsoft Excel, CSV or BibTeX, you will have to convert it to RDF first as described in Recipe 7.3. How often does your data change? If your data changes frequently, you might prefer approaches which generate RDF views on your data, such as D2R Server (Recipe 7.2), or wrappers (Recipe 7.4).
  • After you have published your information as Linked Data, you should ensure that there are external RDF links pointing at URIs from your dataset, so that RDF browsers and crawlers can find your data. There are two basic ways of doing this: Add several RDF links to your FOAF profile that point at URIs identifying central resources within your dataset. Assuming that somebody else in the world knows you and references your FOAF profile, your new dataset is now reachable by following RDF links. Convince the owners of related data sources to auto-generate RDF links to URIs from your dataset. Or, to make it easier for the owner of the other dataset, create the RDF links yourself and send them to her so that she just has to merge them with her dataset. A project that is extremely open to setting RDF links to other data sources is the DBpedia community project. Just announce your data source on the DBpedia mailing list or send a set of RDF links to the list.
  • Serving Static RDF Files The simplest way to serve Linked Data is to produce static RDF files and upload them to a web server. This approach is typically chosen in situations where the RDF files are created manually (e.g. when publishing personal FOAF files or RDF vocabularies), or where the RDF files are generated or exported by some piece of software that only outputs to files.
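If you test such static files with Python's built-in web server rather than a full web server, the main thing to get right is the MIME type; a minimal sketch (the extension mappings are the only non-default part):

    # Sketch: serving static RDF files locally with the correct MIME types.
    import http.server
    import socketserver

    handler = http.server.SimpleHTTPRequestHandler
    handler.extensions_map[".rdf"] = "application/rdf+xml"  # not mapped by default
    handler.extensions_map[".ttl"] = "text/turtle"

    with socketserver.TCPServer(("", 8000), handler) as httpd:
        httpd.serve_forever()  # serves the current directory on port 8000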
  • Serving Relational Databases If your data is stored in a relational database it is usually a good idea to leave it there and just publish a Linked Data view on your existing database. A tool for serving Linked Data views on relational databases is D2R Server. D2R server relies on a declarative mapping between the schemata of the database and the target RDF terms. Based on this mapping, D2R Server serves a Linked Data view on your database and provides a SPARQL endpoint for the database.
  • Alternatively, you can also use: OpenLink Virtuoso to publish your relational database as Linked Data. Virtuoso RDF Views – Getting Started Guide on how to map your relational database to RDF and Deploying Linked Data on how to get URI dereferencing and content negotiation into place. Triplify, a small plugin for Web applications, which reveals the semantic structures encoded in relational databases by making database content available as RDF, JSON or Linked Data.
  • Serving other Types of Information If your information is currently represented in formats such as CSV, Microsoft Excel, or BibTEX and you want to serve the information as Linked Data on the Web, it is usually a good idea to do the following: Convert your data into RDF using an RDFizing tool. There are two locations where such tools are listed: ConverterToRdf maintained in the ESW Wiki, and RDFizers maintained by the SIMILE team. After conversion, store your data in an RDF repository. A list of RDF repositories is maintained in the ESW Wiki. Ideally the chosen RDF repository should come with a Linked Data interface which takes care of making your data Web accessible. As many RDF repositories have not implemented Linked Data interfaces yet, you can also choose a repository that provides a SPARQL endpoint and put Pubby as a Linked Data interface in front of your SPARQL endpoint. The approach described above is taken by the DBpedia project, among others. The project uses PHP scripts to extract structured data from Wikipedia pages. This data is then converted to RDF and stored in an OpenLink Virtuoso repository which provides a SPARQL endpoint. In order to get a Linked Data view, Pubby is put in front of the SPARQL endpoint. If your dataset is sufficiently small to fit completely into the web server's main memory, then you can do without the RDF repository, and instead use Pubby's conf:loadRDF option to load the RDF data from an RDF file directly into Pubby. This might be simpler, but unlike a real RDF repository, Pubby will keep everything in main memory and doesn't offer a SPARQL endpoint.
  • Implementing Wrappers around existing Applications or Web APIs Large numbers of Web applications have started to make their data available on the Web through Web APIs. Examples of data sources providing such APIs include eBay, Amazon, Yahoo, Google and Google Base. A more comprehensive API list is found at Programmable Web. Different APIs provide diverse query and retrieval interfaces and return results using a number of different formats such as XML, JSON or ATOM. This leads to three general limitations of Web APIs: their content cannot be crawled by search engines; Web APIs cannot be accessed using generic data browsers; and mashups are implemented against a fixed number of data sources and cannot take advantage of new data sources that appear on the Web. These limitations can be overcome by implementing Linked Data wrappers around APIs. In general, Linked Data wrappers do the following: They assign HTTP URIs to the non-information resources about which the API provides data. When one of these URIs is dereferenced asking for application/rdf+xml, the wrapper rewrites the client's request into a request against the underlying API. The results of the API request are transformed to RDF and sent back to the client.
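A compressed sketch of that wrapper flow (not from the article): the underlying JSON API, its response fields, and the vocabulary are hypothetical; only the shape of the flow (URI dereferenced, API called, result turned into RDF) is the point.

    # Sketch: the core of a Linked Data wrapper around a JSON Web API.
    # API URL, response fields, and vocabulary are hypothetical.
    import requests
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://wrapper.example.org/resource/")

    def describe(resource_id: str) -> str:
        # 1. Rewrite the dereferenced URI into a request against the underlying API.
        record = requests.get(f"https://api.example.org/items/{resource_id}").json()

        # 2. Transform the API result into RDF about the non-information resource.
        g = Graph()
        subject = EX[resource_id]
        g.add((subject, RDF.type, FOAF.Document))
        g.add((subject, FOAF.name, Literal(record["title"])))

        # 3. Return the RDF/XML to be sent back to the client.
        return g.serialize(format="xml")

    # A Linked Data front end would call describe() whenever one of the wrapper's
    # URIs is dereferenced with Accept: application/rdf+xml.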
  • Virtuoso Sponger Virtuoso Sponger is a framework for developing Linked Data wrappers (called cartridges) around different types of data sources. Data sources can range from HTML pages containing structured data to Web APIs. See Injecting Facebook Data into the Semantic Data Web for a demo on how Sponger is used to generate a Linked Data view on Facebook.
  • Discovering Linked Data on the Web The standard way of discovering Linked Data on the Web is by following RDF links within data the client already knows. In order to further ease discovery, information providers can decide to support additional discovery mechanisms: Ping the Semantic Web Ping the Semantic Web is a registry service for RDF documents on the Web, which is used by several other services and client applications. Therefore, you can improve the discoverability of your data by registering your URIs with Ping The Semantic Web. HTML Link Auto-Discovery It also makes sense in many cases to set links from existing webpages to RDF data, for instance from your personal home page to your FOAF profile. Such links can be set using the HTML <link> element in the <head> of your HTML page. <link rel="alternate" type="application/rdf+xml" href="link_to_the_RDF_version" /> HTML <link> elements are used by browser extensions, like Piggybank and Semantic Radar, to discover RDF data on the Web. Semantic Web Crawling: a Sitemap Extension The sitemap extension allows data publishers to state where RDF is located and which alternative means are provided to access it (Linked Data, SPARQL endpoint, RDF dump). Semantic Web clients and Semantic Web crawlers can use this information to access RDF data in the most efficient way for the task they have to perform. Dataset List on the ESW Wiki In order to make it easy not only for machines but also for humans to discover your data, you should add your dataset to the Dataset List on the ESW Wiki. Please include some example URIs of interesting resources from your dataset, so that people have starting points for browsing.
Erwin Karbasi

The ultimate mashup -- Web services and the semantic Web, Part 4: Create an ontology - 0 views

  • In this tutorial The purpose of this tutorial series is to create a mashup application so smart that users can literally add and remove services at will, and the system will know what to do with them. The series progresses as follows: Part 1: You learn about the concept of mashups and how they work. You then build a simple version of one and also discover serious performance problems involved in making potentially dozens of Web calls. Part 2: You solve some of that problem by using DB2's new pureXML capabilities to build an XML cache, which saves the results of previous requests and also enables you to retrieve specific information. Parts 3, 4, and 5: Ultimately, you will need to use ontologies, or vocabularies that define concepts and their relationships, so in Part 3 you started that process by learning about RDF and RDFS, two key ingredients in the Web Ontology Language (OWL), which is discussed here in Part 4. In Part 5, you will take the ontologies created in Part 4 and use them to enable users to change out information sources. Part 6: At this point, you have a working application and the framework in place so that the system can use semantic reasoning to understand the services at its disposal. In this part, you give the user control, enabling him or her to pick and choose the data that is used for a custom mashup.
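For a flavor of what "creating an ontology" in Part 4 involves, here is a minimal OWL class hierarchy built with rdflib; the service vocabulary is made up for illustration and is not the tutorial's ontology.

    # Sketch: a tiny OWL ontology (two classes, a subclass axiom, one property).
    # The mash: vocabulary is illustrative, not the tutorial's.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    MASH = Namespace("http://example.org/mashup#")

    g = Graph()
    g.bind("mash", MASH)

    g.add((MASH.Service, RDF.type, OWL.Class))
    g.add((MASH.WeatherService, RDF.type, OWL.Class))
    g.add((MASH.WeatherService, RDFS.subClassOf, MASH.Service))

    g.add((MASH.providesData, RDF.type, OWL.ObjectProperty))
    g.add((MASH.providesData, RDFS.domain, MASH.Service))
    g.add((MASH.providesData, RDFS.label, Literal("provides data", lang="en")))

    print(g.serialize(format="turtle"))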
Erwin Karbasi

linked data extractor prototype details - webr3.org - 0 views

  •  
    OpenCalais & Zemanta Comparison
Erwin Karbasi

Puzzlepieces - Comparing NLP APIs for Entity Extraction (January 2, 2010) - 0 views

  • The APIs I tested were, roughly in order of increasing quality: Yahoo: Term Extraction; OpenCalais: API; BeliefNetworks: Recommend Concepts; OpenAmplify: API; AlchemyAPI: Named Entity Extraction; Evri: REST API (get entity network about some text).
  •  
    Comparison entity extraction
Erwin Karbasi

Experiences developing a simple Linked Data based application « SourceForge.n... - 0 views

  • The Idea After being urged by Juan, the main organizer of the Linked Data-a-thon, we seriously thought about participating. Coming up with an idea for a cool application that could also be developed in only a few hours turned out to be a minor challenge. However, we remembered one of the demo queries of our SQUIN service. This query asks for traditional Chinese medicine as an alternative to the western drug Varenicline. Answering this query requires data from at least three different linked datasets provided by the Linking Open Drug Data (LODD) project. Hence, this query demonstrates the added value of interlinking data from multiple sources on the Web. Furthermore, the query gives a glimpse of how ordinary people may benefit from openly available data. For this reason we agreed to build a simple application around this query and let users vary the drug for which the alternatives are to be found. Since answering only one type of question is quite boring, we decided to add some functionality that enables users to drill a bit deeper into the data that is available. Based on our knowledge of the LODD datasets we came up with the idea of adding value to the search results by allowing users to inspect possible side effects of the alternative medicines. This valuable functionality would require the evaluation of data from at least two additional datasets and, by using SQUIN, it would be realizable with only one additional type of SPARQL query.
Erwin Karbasi

linked-data-api - Linked Data API Specification - Project Hosting on Google Code - 0 views

  • This document defines a vocabulary and processing model for a configurable API layer intended to support the creation of simple RESTful APIs over RDF triple stores. The API layer is intended to be deployed as a proxy in front of a SPARQL endpoint to support: Generation of documents (information resources) for the publishing of Linked Data Provision of sophisticated querying and data extraction features, without the need for end-users to write SPARQL queries Delivery of multiple output formats from these APIs, including a simple serialisation of RDF in JSON syntax
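The specification itself is declarative configuration rather than code, but the SPARQL round trip such an API layer performs on the user's behalf can be sketched with SPARQLWrapper (assumed installed) against any public endpoint; DBpedia is used here purely as an example.

    # Sketch: the kind of SPARQL query and JSON result an API layer hides
    # from end users, issued directly against a SPARQL endpoint.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")  # any endpoint works
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?label WHERE {
            <http://dbpedia.org/resource/Berlin> rdfs:label ?label .
            FILTER (lang(?label) = "en")
        } LIMIT 1
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["label"]["value"])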
Erwin Karbasi

Meshing Information and Mashing Functionality | Mashstream - 0 views

  • Linked Data employs basic identification of content to loosely mesh and deliver shared semantic content of raw, personalized data from disparate sources. Likewise, Mashups employ semantic tagging but require logic through coding or high-level scripting using open APIs to provide functionality from disparate cloud services melded with proprietary data to render a new knowledge set or services. While both hold great promise for Web 2.0 moving to Web 3.0 strategies, each has significant differences in implementation and usage. Both technologies remain in their infancy to date but they are growing up fast.
  • Linked Data employs simple HTTP protocols to connect exposed data stores and stream unedited content quickly to interested readers before publishing as contextual knowledge. As a subset of the Semantic Web, Linked Data relies on semantic markup to define the meaning of content and then employs dereferenceable URIs (semantics) to locate and deliver Web content as URL addresses. It is Tim Berners-Lee’s vision of the Web as a universal data, information, and knowledge exchange.
  • As Kingsley Idehen playfully puts it, “Linked Data basically delivers the ability to Mesh disparate data sources rather than settling for brute-force data mashing as exemplified by Mashups.”
  • ...1 more annotation...
  • Scientific research needs Linked Data now. I was invited to apply for a position as a writer for a leading-edge nanotechnology and DNA research company. They wanted someone to handle content to share between scientists both internally and externally. My input as an experienced information developer was to establish an ontology of common semantics to automate content markup based on semantics, to facilitate the interaction between research teams, and to provide real-time data. They didn’t need to throw this data over the wall to a writer to publish on a web site or through formal papers. They are getting back to me. Example: http://bio2rdf.org/ – Semantic web atlas of post-genomic knowledge. Shotgun communication. Companies wrestle with all of their diverse content when releasing products and services from the different R&D, technical support, marketing, and writing teams in individual silos. All of this overlapping information needs to be brought together to identify cost savings and get the right information in front of the various types of customers (prospective customers, customers needing best practices, upsell customers, internal employees, et al.). Example: See my posting on this issue and an example of shotgun communication. Low-hanging opportunities. Some mashup applications are easier to implement than others. These high-value, low-cost implementations need to be taken to the market to build on for more complex and productive products. Example: Housingmaps.com.
Erwin Karbasi

About Mashups and Linked Data | Mashstream - 0 views

  • What is a Mashup? First of all, we need to agree on a basic definition of a mashup. There are many types of mashups—commercial, enterprise, data, and knowledge mashups to name a few categories. But basically, all types of mashups consist of two or more unrelated sources to create a new entity as a product, service, e-book, or online application. Mashups capture data and online applications from various points of origin and combine them to create a new functional entity specific to the needs of each corporation, reader, and customer. Mashups employ open APIs to create functional services that capture exposed data and application features from existing Web content and protocols to form a new type of online application.
  • What is the Semantic Web? The Semantic Web identifies content on the Web based on its meaning, providing intelligence from various data sources or actionable features from online Web applications. This is in contrast to “syntactic” markup, which only identifies the look and layout of content. Tim Berners-Lee, the inventor of the World Wide Web, is one of many champions of semantic web interaction as a consistent medium for the exchange of data, information, and knowledge (see the Information Confluence for the distinction). The Semantic Web relies on the markup of separate information fields to be understood by computers in order to perform much of the perfunctory background operations involved in finding, sharing, and combining information from the Web to render as usable knowledge. The Semantic Web continues to fall short of many of its highest ambitions to date, but is still supported by Tim Berners-Lee and the World Wide Web Consortium (W3C). Criticism about its practical feasibility, lagging progress, security and privacy, and need for additional markup has yet to be resolved. Regardless, many still see semantic content as Web 3.0, the next evolution of the Web. It promises to provide intelligence and interaction for semantic publishing in scientific research, exposing experimental data as structured information for real-time sharing of information by researchers. The Semantic Web is the only strategy proffered at this time to furnish intelligence and context across disparate Web information systems. Structured information based on the context and meaning of information seems to be the only way to control Web data as it grows exponentially. Semantic publishing promises application interoperability and efficient, automated data integration. One of the more realistic components of the Semantic Web is Linked Data, an environment that sees all entities on the Web as individual objects. These objects can then be intelligently combined and repurposed to create a new service, product, or knowledge set.
  • What is Linked Data? Linked data simplifies much of the complexity of the Semantic Web. In contrast to the full Semantic Web, linked data publishes structured data using URI addresses rather than relying on a hierarchical (ontological) cascading of parent/child relationships established by semantic markup. Using URIs, linked data handles everything on the Web as an object to be formed and presented as knowledge or actuated as new services and products. The following diagram represents the standard concept behind Linked Data. Disparate data sources from data stores, documents, Web sites, and other cloud and internal data repositories come together organically to grow a new, fully-functional knowledge set, service, or product. The main goals and concepts behind linked data include these four principles: use URIs to identify web objects; use HTTP URIs so that those objects can be looked up by readers and by user agents from other applications; provide useful information (e.g., a structured description or metadata) when the URI is referenced; and link to related URIs so that more related information can be discovered on the Web.
  • ...1 more annotation...
  • Linked Data is about using HTTP-referenceable Names (generic HTTP scheme URIs) for Data Objects / Data Items that dereference to HTTP-accessible Data Sources (via locator-oriented HTTP scheme URIs commonly known as URLs). There is a duality to the generic HTTP scheme URI that enables Identity (Name) and Access (Address) to exist in a single unit. It is this duality that enables the Linked Data magic whereby dereferencing a Data Object’s Name results in access to a structured data representation of its description (Metadata – a constellation of data relations that describe said Data Object). Bearing in mind the above, Linked Data basically delivers the ability to Mesh disparate data sources rather than settling for brute-force data mashing as exemplified by Mashups. Enterprises have to look to work with disparate data sources in the same way they work with data hosted in a single DBMS from a single vendor, i.e., they should look to JOIN structured Data en route to constructing holistic views over their data silos. Of course, the same applies to the public Web, since it too is proliferated with Data Silos courtesy of Web 2.0.
Erwin Karbasi

The web as a CMS - 0 views

  • In exchange, MusicBrainz receives a monthly license fee that will allow MetaBrainz to hire some engineering help in the coming months to work on new features and to improve the existing infrastructure. This is quite significant since MusicBrainz has been resource constrained for many months now — having paid people on staff will ensure a more reasonable amount of progress moving forward. Even cooler, the BBC online music editors will soon participate in the MusicBrainz community contributing their knowledge to MusicBrainz. The goal is to have the BBC /music editorial team round out and add new information to MusicBrainz as they need to use it in their MusicBrainz enabled applications internally.
Erwin Karbasi

Georgi Kobilarov » The web as a CMS for data - 0 views

  • The distributed publishing nature of open linked data is vital to the health of the datasets. It means that each dataset is curated by a community that cares specifically about that data. However, this argument of distributed publishing and curation is in my opinion not only valid in the public space, but also for data and content sources within enterprises, which with their many departments and teams form webs of their own. The scientist running an experiment, for example, is the one who knows the resulting data best, so let her curate it. The sales department has the best understanding of their sales data, the product development team of their knowledge base. In the end this is all similar to the discussion about open data (as in data on the Web) and linked data (as in a technology for data webs). They are complementary, but each of them has its own and independent value proposition.
Erwin Karbasi

With a Web of Data, what would you do? - 0 views

  • Here’s my idea: If we had a Web of Data, I would build an application for painless travel planning. It would integrate flight plans, train timetables, bus routes, car rental offers, etc. And the user would be able to just say: I want to go from A to B: find me the best/cheapest/fastest routes. Because today travel planning is a pain: often your destination city has no airport, so you have to choose one of multiple airports of nearby cities. And the optimal choice for that depends on train connections from those airports to your destination city. And the choice of a train station depends on bus connections. And so on. With a Web of Data, an application could do all that combining for me, the same way flight booking sites do that today for just flights.
  •  
    What is Web of Data
Erwin Karbasi

What would you build with a web of data? Decision support - 0 views

  • The web of data can be designed in a way that it collects experiences (including decision-relevant measurements of machines) in a precise and *comparable* way (much more precise and better comparable than text). So the web of data can summarize experiences in a well-defined, comparable way for decision support. For this, a clear similarity relation is necessary. The natural way to do this is a vectorial description of resources, i.e. quantification of the resource's properties and regarding the result (a sequence of numbers) as a vector. After defining an appropriate metric (distance function) we can calculate the similarity of vectors by calculating the distance between them: the smaller the distance, the more similar the vectors and (in case of good quantification) the original resources. Using HTTP URIs allows all domain name owners to define these vectors and optimized distance functions.
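A bare-bones sketch of that idea, assuming two resources have already been quantified into property vectors and that plain Euclidean distance is an acceptable metric; the numbers are made up.

    # Sketch: comparing two resources via vectors of quantified properties.
    # Property values are invented; Euclidean distance is one possible metric.
    import math

    def distance(a: list[float], b: list[float]) -> float:
        # Smaller distance means more similar resources (under this quantification).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    machine_a = [0.8, 120.0, 4.5]   # e.g. efficiency, throughput, rating
    machine_b = [0.7, 115.0, 4.8]

    print(distance(machine_a, machine_b))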
  •  
    What is Linked Data
Erwin Karbasi

Where is the business value in Linked Data? - Tom Heath's Displacement Activities - 0 views

  • In contrast, Linked Data mashups (or “meshups” as they sometimes get called) are simply statements linking items in related data sets. Crucially these items are identified by URIs starting “http://”, each of which may have been minted in the domain of the data publisher, meaning that whenever anyone looks up one of these URIs they may be channeled back to the original data source. It is this feature that creates the business value in Linked Data compared to conventional Web APIs. Rather than releasing data into the cloud untethered and untraceable, Linked Data allows organisations and individuals to expose their data assets in a way that is easily consumed by others, whilst retaining indicators of provenance and a means to capitalise on or otherwise benefit from their commitment to openness. Minting URIs to identify the entities in your data, and linking these to related items in other data sets presents an opportunity to channel traffic back to conventional Web sites when someone looks up those URIs. It is this process that presents opportunities to generate business value through Linked Data principles.
  •  
    Linked Data Explanation.
Erwin Karbasi

Querying Linked Data with SPARQL (2010) - 0 views

  •  
    SQUIN - Semantic Web Query Interface - Slide
Erwin Karbasi

Semantic Web Client Library - 0 views

  • The Semantic Web Client Library represents the complete Semantic Web as a single RDF graph. The library enables applications to query this global graph using SPARQL and find(SPO) queries. To answer queries, the library dynamically retrieves information from the Semantic Web by dereferencing HTTP URIs, by following rdfs:seeAlso links, and by querying the Sindice search engine. The library is written in Java and is based on the Jena framework.