
Meshing Information and Mashing Functionality | Mashstream

  • Linked Data employs basic identification of content to loosely mesh and deliver shared semantic content: raw, personalized data from disparate sources. Mashups likewise employ semantic tagging, but they require logic, written as code or high-level scripting against open APIs, to meld functionality from disparate cloud services with proprietary data into a new knowledge set or service. While both hold great promise for Web 2.0 strategies moving toward Web 3.0, the two differ significantly in implementation and usage. Both technologies remain in their infancy, but they are growing up fast.
  • Linked Data employs simple HTTP protocols to connect exposed data stores and stream unedited content quickly to interested readers before it is published as contextual knowledge. As a subset of the Semantic Web, Linked Data relies on semantic markup to define the meaning of content and then employs dereferenceable URIs to locate and deliver Web content at URL addresses. It is Tim Berners-Lee’s vision of the Web as a universal exchange of data, information, and knowledge.
  • As Kingsley Idehen playfully puts it, “Linked Data basically delivers the ability to Mesh disparate data sources rather than settling for brute-force data mashing as exemplified by Mashups.”
  • Scientific research needs Linked Data now. I was invited to apply for a position as a writer with a leading-edge nanotechnology and DNA research company that wanted someone to manage content shared between scientists, both internally and externally. My input as an experienced information developer was to establish an ontology of common semantics to automate content markup, facilitating interaction between research teams and providing real-time data. They didn’t need to throw this data over the wall to a writer to publish on a web site or through formal papers. They are getting back to me. Example: http://bio2rdf.org/, a semantic web atlas of post-genomic knowledge.
    Shotgun communication. Companies wrestle with all of their diverse content when releasing products and services from R&D, technical support, marketing, and writing teams working in individual silos. All of this overlapping information needs to be brought together to identify cost savings and get the right information in front of the various types of customers (prospective customers, customers needing best practices, upsell customers, internal employees, et al.). Example: see my posting on this issue and an example of shotgun communication.
    Low-hanging opportunities. Some mashup applications are easier to implement than others. These high-value, low-cost implementations need to be taken to market as a base for more complex and productive products. Example: Housingmaps.com.

About Mashups and Linked Data | Mashstream

  • What is a Mashup? First of all, we need to agree on a basic definition of a mashup. There are many types of mashups: commercial, enterprise, data, and knowledge mashups, to name a few categories. But basically, all types of mashups combine two or more unrelated sources to create a new entity, whether a product, service, e-book, or online application. Mashups capture data and online applications from various points of origin and combine them to create a new functional entity specific to the needs of each corporation, reader, and customer. They employ open APIs to build functional services that capture exposed data and application features from existing Web content and protocols, forming a new type of online application.
  • What is the Semantic Web? The Semantic Web identifies content on the Web based on its meaning, providing intelligence from various data sources or actionable features from online Web applications. This is in contrast to “syntactic” markup, which only identifies the look and layout of content. Tim Berners-Lee, the inventor of the World Wide Web, is one of many champions of semantic web interaction as a consistent medium for the exchange of data, information, and knowledge (see the Information Confluence for the distinction). The Semantic Web relies on the markup of separate information fields that computers can understand, in order to perform much of the perfunctory background work involved in finding, sharing, and combining information from the Web to render as usable knowledge. The Semantic Web continues to fall short of many of its highest ambitions to date, but it is still supported by Tim Berners-Lee and the World Wide Web Consortium (W3C). Criticism about its practical feasibility, lagging progress, security and privacy, and the need for additional markup has yet to be resolved. Regardless, many still see semantic content as Web 3.0, the next evolution of the Web. It promises to provide intelligence and interaction for semantic publishing in scientific research, exposing experimental data as structured information for real-time sharing among researchers. The Semantic Web is the only strategy proffered at this time to furnish intelligence and context across disparate Web information systems. Structuring information by its context and meaning seems to be the only way to control Web data as it grows exponentially. Semantic publishing promises application interoperability and efficient, automated data integration. One of the more realistic components of the Semantic Web is Linked Data, an environment that sees all entities on the Web as individual objects. These objects can then be intelligently combined and repurposed to create a new service, product, or knowledge set.
  • What is Linked Data? Linked data strips away much of the complexity of the Semantic Web. In contrast to the full Semantic Web, linked data publishes structured data using URI addresses rather than relying on a hierarchical (ontological) cascade of parent/child relationships established by semantic markup. Using URIs, linked data handles everything on the Web as an object that can be formed and presented as knowledge or actuated as new services and products. The concept is usually drawn as a diagram: disparate data sources from data stores, documents, Web sites, and other cloud and internal data repositories come together organically to grow a new, fully functional knowledge set, service, or product. The main goals and concepts behind linked data include these four principles:
    1. Use URIs to identify web objects.
    2. Use HTTP URIs so that readers and user agents from other applications can look those objects up.
    3. Provide useful information (e.g., a structured description or metadata) when a URI is dereferenced.
    4. Include links to related URIs to expose data and improve searches for related information on the Web.
  • Linked Data is about using HTTP-referenceable Names (generic HTTP scheme URIs) for Data Objects / Data Items that de-reference to HTTP-accessible Data Sources (via locator-oriented HTTP scheme URIs, commonly known as URLs). There is a duality to the generic HTTP scheme URI that enables Identity (Name) and Access (Address) to exist in a single unit. It’s this duality that enables the Linked Data magic whereby de-referencing a Data Object’s Name results in access to a structured data representation of its description (Metadata – a constellation of data relations that describe said Data Object). Bearing in mind the above, Linked Data basically delivers the ability to Mesh disparate data sources rather than settling for brute-force data mashing as exemplified by Mashups. Enterprises have to look to work with disparate data sources in the same way they work with data hosted in a single DBMS from a single vendor, i.e., they should look to JOIN structured data en route to constructing holistic views over their data silos. Of course, the same applies to the public Web, since it too is proliferated with data silos courtesy of Web 2.0.
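
To make the Name/Address duality concrete, here is a minimal sketch of de-referencing a Linked Data URI over plain HTTP. The DBpedia resource URI and the text/turtle media type are illustrative assumptions; any server following Linked Data conventions (content negotiation plus a 303 redirect from the name URI to a data URL) should respond similarly.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DereferenceLinkedData {
        public static void main(String[] args) throws Exception {
            // A generic HTTP URI that *names* a thing (example: Berlin in DBpedia).
            URL name = new URL("http://dbpedia.org/resource/Berlin");
            HttpURLConnection conn = (HttpURLConnection) name.openConnection();
            // Content negotiation: ask for a structured description, not an HTML page.
            conn.setRequestProperty("Accept", "text/turtle");
            // Linked Data servers typically answer with a 303 redirect from the name
            // URI to a locator URI (URL) serving the data; HttpURLConnection follows
            // same-protocol redirects by default.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // RDF triples describing the object
                }
            }
        }
    }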

The web as a CMS

  • In exchange, MusicBrainz receives a monthly license fee that will allow MetaBrainz to hire some engineering help in the coming months to work on new features and to improve the existing infrastructure. This is quite significant, since MusicBrainz has been resource-constrained for many months now; having paid people on staff will ensure a more reasonable amount of progress moving forward. Even cooler, the BBC online music editors will soon participate in the MusicBrainz community, contributing their knowledge to MusicBrainz. The goal is to have the BBC /music editorial team round out and add new information to MusicBrainz as they need it in their MusicBrainz-enabled applications internally.

D2R Server - Publishing Relational Databases on the Semantic Web

  • D2R Server is a tool for publishing relational databases on the Semantic Web. It enables RDF and HTML browsers to navigate the content of the database, and allows applications to query the database using the SPARQL query language.
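
As a hedged illustration of how an application might query a D2R-published database, the sketch below uses Apache Jena's ARQ API against an assumed local endpoint (D2R Server conventionally listens on port 2020 and serves SPARQL at /sparql; adjust to your installation, and note that older Jena releases used the com.hp.hpl.jena package prefix instead).

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;

    public class D2RSparqlClient {
        public static void main(String[] args) {
            // Assumed service location for a local D2R Server instance.
            String service = "http://localhost:2020/sparql";
            String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";

            QueryExecution qexec = QueryExecutionFactory.sparqlService(service, query);
            try {
                // Each row is a triple mapped on the fly from the relational data.
                ResultSet results = qexec.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.nextSolution();
                    System.out.println(row);
                }
            } finally {
                qexec.close();
            }
        }
    }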

Joseki - A SPARQL Server for Jena

  • Joseki is an HTTP engine that supports the SPARQL Protocol and the SPARQL RDF Query language. SPARQL is developed by the W3C RDF Data Access Working Group. Joseki features:
    - RDF data from files and databases
    - HTTP (GET and POST) implementation of the SPARQL protocol
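
The sketch below shows the GET half of the SPARQL protocol at the HTTP level: the query travels as a URL-encoded 'query' parameter. The endpoint URL is an assumption for a locally running Joseki instance; the protocol mechanics are the same for any conforming server.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class SparqlProtocolGet {
        public static void main(String[] args) throws Exception {
            // Assumed endpoint for a locally running Joseki service.
            String endpoint = "http://localhost:2020/sparql";
            String query = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5";

            // SPARQL protocol over GET: URL-encoded 'query' parameter
            // (POST carries the same parameter in the request body).
            URL url = new URL(endpoint + "?query=" + URLEncoder.encode(query, "UTF-8"));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/sparql-results+xml");

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // SPARQL XML results document
                }
            }
        }
    }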

RelFinder - Interactive Relationship Discovery in RDF Datasets

  • Are you interested in how things are related to each other? The RelFinder helps you get an overview: it extracts and visualizes relationships between given objects in datasets and makes these relationships interactively explorable. Highlighting and filtering features support analysis at both a global and a detailed level. The RelFinder is based on the open-source Adobe Flex framework, is easy to use, and works with any RDF dataset that provides standardized SPARQL access.

NOSQLEU - Graph Databases and Neo4j

  • Slide Show

InfoQ: Gremlin, a Language for Working with Graphs

  • Gremlin is a Turing-complete programming language useful for working with graphs. It is a Java DSL that makes extensive use of XPath to query, analyze and manipulate graphs. Gremlin can be used to create multi-relational graphs. Because the elements of the graph, vertices and edges, have properties defined as key-value pairs, the graph is called a property graph. The language has the following types:
    graph: a graph is composed of a set of vertices and a set of edges.
    vertex: a vertex is composed of a set of outgoing edges, incoming edges, and a map of properties.
    edge: an edge is composed of an outgoing vertex, an incoming vertex, and a map of properties.
    boolean: a boolean can either be true or false.
    number: a number is a natural (integer) or real (double) number.
    string: a string is an array of characters.
    list: a list is an ordered collection of potentially duplicate objects.
    map: a map is an associative array from a set of object keys to a collection of object values.
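
To illustrate the property-graph model the excerpt describes (vertices and edges carrying key-value properties), here is a small sketch using the Blueprints API from the same TinkerPop stack as Gremlin. Package names follow Blueprints 2.x and are an assumption; earlier releases used the com.tinkerpop.blueprints.pgm prefix.

    import com.tinkerpop.blueprints.Direction;
    import com.tinkerpop.blueprints.Edge;
    import com.tinkerpop.blueprints.Graph;
    import com.tinkerpop.blueprints.Vertex;
    import com.tinkerpop.blueprints.impls.tg.TinkerGraph;

    public class PropertyGraphSketch {
        public static void main(String[] args) {
            // In-memory property graph (TinkerGraph is the Blueprints reference implementation).
            Graph graph = new TinkerGraph();

            // Vertices carry key-value properties...
            Vertex marko = graph.addVertex(null); // null lets the graph assign an id
            marko.setProperty("name", "marko");
            Vertex pipes = graph.addVertex(null);
            pipes.setProperty("name", "pipes");

            // ...and so do edges, which additionally have a label and a direction.
            Edge created = graph.addEdge(null, marko, pipes, "created");
            created.setProperty("date", 2009);

            // Walk marko's outgoing "created" edges to their head vertices.
            for (Edge e : marko.getEdges(Direction.OUT, "created")) {
                System.out.println(e.getVertex(Direction.IN).getProperty("name")); // "pipes"
            }
        }
    }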

InfoQ: Neo4j: Java-based NoSQL Graph Database

  • Neo4j addresses the problem of performance degradation in queries that involve many joins in a traditional RDBMS. By modeling the data as a graph, Neo4j can traverse nodes and edges at the same speed regardless of the amount of data constituting the graph. This yields secondary effects like very fast graph algorithms, recommender systems, and OLAP-style analytics that are currently not possible with normal RDBMS setups.

NoSQL in the Enterprise

  • In this article, Sourav Mazumder explores what NoSQL databases are, how they fit into Enterprise IT, the challenges facing enterprise adoption, how to choose the appropriate NoSQL database for a given application, a short list of NoSQL databases which are likely to be good matches for enterprise applications, and advice for how to adopt NoSQL databases within an enterprise.

Introduction to Linked Data

  • The Best definition of Linked Data!!!

What Pull and the Semantic Web Mean for Small Business, Part I : The World :: American ...

  • 1. We are shifting from pushing information to pulling it. Media companies know that they will have to give their consumers full control to pull stories, news, music, television, movies, and all their content to them when they want it, where they want it, the way they want it. This shift is coming to all industries, from medicine to banking to cars and toys.
    2. The shift has already begun. The financial reporting industry has already embraced the semantic web and has built the largest commercial ecosystem of online, interoperable data so far. Many other industries are gearing up to follow. Your industry is no exception.
    3. Products will be pulled. As you take a product off the shelf, you’ll be pulling on the entire supply chain, causing a ripple effect back to the manufacturer. Your customers will start pulling your products from you, rather than you pushing them. Customers will be much more involved in product development; cycles will shorten dramatically.
    4. Big hint: this goes for marketing, too. Push marketing will become less and less relevant. Customers will say what they are looking for and you will have to respond on their terms, not yours.
    5. Services will be pulled as well. A building will order its own supplies and maintenance services. Your services may be combined with those of your competitor without you even knowing it. Customers will dictate the terms and you will have to go along.
    6. Most business processes will invert. The sales-oriented or solutions-oriented culture many companies have will be much less profitable as market- and customer-driven processes rise. Customers will soon be in a position to enter their desires in a way that can fit into your manufacturing or production processes directly.
    7. Customers will become ten times more powerful! They will keep the customer relationship on their side. Projects like the VRM (Vendor Relationship Management) Project at Harvard and OpenID will put customers much more in charge.
    8. Pulling will have a profound impact on our entire economy. More than $3 trillion of our economy will be affected. Everything from retail to health care to the IRS will be streamlined by the principles of pull.
    9. Watch for the warning signs of push cancer. Push is about vendor lock-in and proprietary data strategies. Apple and Facebook are push companies; they will both face strong competition from the open web. They will both have to change their cultures to embrace pull or they will become much smaller.
    10. Watch for the signs of pull. Pull is the natural way businesses should work to put the customer first. When you hear about companies opening their data, giving customers account portability, and talking about interoperability with competitors, that’s the language of pull. Google embraces some of the principles of pull, but we’ll see how far they go. There is a lot at stake: learn now to open your corporate culture to the principles that will shape successful 21st-century companies.

Neo4j with Spring - IMDB Example

  • These pages will guide you through an example web application using the IMDB dataset. The aim is to show an example web application with a Spring-based architecture, close to what a real-life application could look like.

Web 3.0 Explained

  • I’m sure a lot of you out there have heard the term Web 2.0, but today I wanted to explain Web 3.0. Web 3.0 is a very interesting term that refers to the semantic web. The differences between Web 1.0, Web 2.0, and Web 3.0 break down as follows.
    Web 1.0 – Web 1.0 was all about static, read-only content. The best examples were GeoCities and Hotmail, both great static HTML websites with read-only content. People preferred navigating the web through the link directories of Yahoo! and DMOZ.
    Web 2.0 – Web 2.0 is about user-generated content and the read-write web. People consume as well as contribute information through blogs or sites like Flickr, YouTube, Digg, etc. The line dividing a consumer from a content publisher is increasingly blurred in the Web 2.0 era.
    Web 3.0 – This will be about the semantic web (the meaning of data), personalization (e.g., iGoogle), intelligent search, and behavioral advertising, among other things.

Franz Inc. Web 3.0's Database

  • AllegroGraph

NoSQL Graph Database Comparison | Javalobby

  • A few days ago I published a short overview of the most trendy graph databases. Today I’m bringing you a review of their most important features. As you can see, the current ecosystem is quite big, without general uniformity, although this is normal when analyzing an ongoing technology movement. The comparison table in the original post shows substantial differences that can help our projects; next we are going to analyze these main differences.

Neo4j Blog: The top 10 ways to get to know Neo4j

  • The common domain implementation pattern when using Neo4j is to let the domain objects wrap a node and store the state of the entity in the node properties. To relieve you of the boilerplate code needed for this, you can use a framework like jo4neo (intro, blog posts), where you use annotations to declare properties and relationships but still have the full power of the graph database available for deep traversals and other graphy stuff. Here's a code sample showing jo4neo in action:

        public class Person {
            // used by jo4neo
            transient Nodeid node;
            // simple property
            @neo String firstName;
            // helps you store a java.util.Date to neo4j
            @neo Date date;
            // jo4neo will index for you
            @neo(index=true) String email;
            // many-to-many relation
            @neo Collection<Role> roles;

            /* normal class-oriented
             * programming stuff goes here */
        }

Lars Kirchhoff [Web Journal]

  • For more than half a year we have been running a research project about social networks within the blogosphere. The social network analysis is only one step toward getting information about the topic flow within certain blog networks. We used Technorati as a source for the detection of the blog networks using a snowball approach, and then crawled the found blog nodes to identify the network edges. So far we have analyzed five sample networks ranging from 300 to 14'000 nodes with more than 200'000 edges. One task within the project is the visualization of these networks with appropriate tools that enable easy access to the gathered information. Various levels of detail are needed to extract and highlight different network parameters and make them easily understandable. Therefore I did research on the tools currently available.
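
As a rough sketch of the snowball approach described above, the following breadth-first crawler expands a seed set hop by hop and records edges as it goes. The fetchOutlinks helper is hypothetical; a real implementation would download each blog page and parse its outbound links.

    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Queue;
    import java.util.Set;

    public class SnowballCrawler {
        // Adjacency map: blog URL -> blogs it links to (the network edges).
        private final Map<String, Set<String>> edges = new HashMap<>();

        /** Breadth-first snowball sampling from seed blogs, up to maxDepth hops out. */
        public void crawl(Set<String> seeds, int maxDepth) {
            Queue<String> frontier = new ArrayDeque<>(seeds);
            Map<String, Integer> depth = new HashMap<>();
            for (String seed : seeds) {
                depth.put(seed, 0);
            }
            while (!frontier.isEmpty()) {
                String blog = frontier.poll();
                int d = depth.get(blog);
                if (d >= maxDepth) {
                    continue; // stop expanding so the ball doesn't roll forever
                }
                for (String linked : fetchOutlinks(blog)) {
                    edges.computeIfAbsent(blog, k -> new HashSet<>()).add(linked);
                    if (!depth.containsKey(linked)) { // newly discovered node
                        depth.put(linked, d + 1);
                        frontier.add(linked);
                    }
                }
            }
        }

        /** Hypothetical helper: download a blog page and extract its outbound blog links. */
        private List<String> fetchOutlinks(String blogUrl) {
            return Collections.emptyList(); // a real crawler would parse anchors here
        }
    }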

MySQL vs. Neo4j on a Large-Scale Graph Traversal

  • Traversing the Graph. The traversal that was evaluated on each database started from some root vertex and emanated n steps out. There was no sorting, no distinct-ing, etc. The only two variables for the experiments are the length of the traversal and the root vertex to start the traversal from. In MySQL, the following 5 queries denote traversals of length 1 through 5. Note that the "?" is a variable parameter of the query that denotes the root vertex.

        SELECT a.inV FROM graph as a WHERE a.outV=?
        SELECT b.inV FROM graph as a, graph as b WHERE a.inV=b.outV AND a.outV=?
        SELECT c.inV FROM graph as a, graph as b, graph as c WHERE a.inV=b.outV AND b.inV=c.outV AND a.outV=?
        SELECT d.inV FROM graph as a, graph as b, graph as c, graph as d WHERE a.inV=b.outV AND b.inV=c.outV AND c.inV=d.outV AND a.outV=?
        SELECT e.inV FROM graph as a, graph as b, graph as c, graph as d, graph as e WHERE a.inV=b.outV AND b.inV=c.outV AND c.inV=d.outV AND d.inV=e.outV AND a.outV=?

    For Neo4j, the Blueprints Pipes framework was used. A pipe of length n was constructed using the following static method.

        public static Pipeline createPipeline(final Integer steps) {
            final ArrayList<Pipe> pipes = new ArrayList<Pipe>();
            for (int i = 0; i < steps; i++) {
                Pipe pipe1 = new VertexEdgePipe(VertexEdgePipe.Step.OUT_EDGES);
                Pipe pipe2 = new EdgeVertexPipe(EdgeVertexPipe.Step.IN_VERTEX);
                pipes.add(pipe1);
                pipes.add(pipe2);
            }
            return new Pipeline(pipes);
        }

    For both MySQL and Neo4j, the results of the query (SQL and Pipes) were iterated through; thus, all results were retrieved for each query. In MySQL, this was done as follows.

        while (resultSet.next()) {
            resultSet.getInt(finalColumn);
        }

    In Neo4j, this is done as follows.

        while (pipeline.hasNext()) {
            pipeline.next();
        }
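
For completeness, here is a hedged sketch of how the MySQL side of such a benchmark wires together with plain JDBC: the "?" placeholder in the traversal query is bound to a root vertex id before execution. The connection details and the root id 42 are assumptions for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MySqlTraversal {
        public static void main(String[] args) throws Exception {
            // Assumed connection details; 'graph' is the edge table (outV, inV) from the article.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/graphdb", "user", "password");

            // The length-2 traversal; '?' is bound to the root vertex id.
            String sql = "SELECT b.inV FROM graph as a, graph as b "
                       + "WHERE a.inV = b.outV AND a.outV = ?";
            PreparedStatement stmt = conn.prepareStatement(sql);
            stmt.setInt(1, 42); // hypothetical root vertex
            ResultSet resultSet = stmt.executeQuery();
            while (resultSet.next()) {
                resultSet.getInt(1); // iterate every result, as in the benchmark
            }
            resultSet.close();
            stmt.close();
            conn.close();
        }
    }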

InfoQ: Graph Databases, NOSQL and Neo4j

  • Example: the Matrix graph. As mentioned before, social networks represent just a tiny fraction of the applications of graph databases, but they are easy to understand for this example. To demonstrate the basic functionality of Neo4j, below is a small graph from the Matrix movie, visualized with the Eclipse RCP based Neoclipse for Neo4j. The graph is connected to a known reference node (id=0) for convenience, in order to find the way into the network from a known starting point. This is not necessary, but it has proven very usable in practice. The Java implementation looks something like this.
    Create a new graph database in the folder "target/neo":

        EmbeddedGraphDatabase graphdb = new EmbeddedGraphDatabase("target/neo");

    Relationship types can be created on the fly:

        RelationshipType KNOWS = DynamicRelationshipType.withName("KNOWS");

    or via a typesafe Java enum:

        enum Relationships implements RelationshipType { KNOWS, INLOVE, HAS_CODED, MATRIX }

    Now, create two nodes and attach a "name" property to each of them. Then connect these nodes with a KNOWS relationship:

        Node neo = graphdb.createNode();
        neo.setProperty("name", "Neo");
        Node morpheus = graphdb.createNode();
        morpheus.setProperty("name", "Morpheus");
        neo.createRelationshipTo(morpheus, KNOWS);

    Any operation modifying the graph or needing isolation levels for data is wrapped in a transaction, so rollback and recovery work out of the box:

        Transaction tx = graphdb.beginTx();
        try {
            Node neo = graphdb.createNode();
            ...
            tx.success();
        } catch (Exception e) {
            tx.failure();
        } finally {
            tx.finish();
        }

    The full code to create the Matrix graph then looks something like this:

        graphdb = new EmbeddedGraphDatabase("target/neo4j");
        index = new LuceneIndexService(graphdb);
        Transaction tx = graphdb.beginTx();
        try {
            Node root = graphdb.getReferenceNode();
            // we connect Neo with the root node, to gain an entry point to the graph
            // not necessary but practical.
            neo = createAndConnectNode("Neo", root, MATRIX);
            Node mo ...