
Future of the Web: group items tagged "semantic"


Gary Edwards

SVG Is The Future Of Application Development | SitePoint » - 0 views

  •  
    I could see this coming a mile away, and it's about time! ".... So if HTML can't deliver for us here, what will? Microsoft wants us to use Silverlight and Adobe wants us to use Flash and AIR, of course. And Apple…? Apple ostensibly wants us to use HTML5's canvas. Both Microsoft's and Adobe's contenders are proprietary, which seems to be reason enough for web developers to avoid them to a certain degree, and all of them muddy HTML, which is a dangerous thing for the semantic web. But Apple actually has a trick up its sleeve. Like Mozilla's been doing with Firefox, Apple has quietly been implementing better support for SVG, the W3C's Recommendation for XML-based vector graphics, into WebKit. SVG delivers the same kind of vector graphics capabilities that Flash does, but it does so using all the interoperability benefits that XML brings along for the ride. SVG is great for graphically displaying both text and images, manipulating them with declarative visual primitives, and it comes with a host of lickable effects. Ironically, SVG was originally jointly developed by both Adobe and Sun Microsystems but recently it's Sun Labs that has been doing interesting stuff with the technology. The most compelling experiment of this kind has to be Sun Labs's Lively Kernel project....."
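A taste of the interoperability argument above: because SVG is plain XML, any stock XML toolchain can generate or transform it, with no graphics-specific library. A minimal Python sketch (the shapes, sizes, and label are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Build a tiny SVG document using only standard XML tooling.
SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

svg = ET.Element("{%s}svg" % SVG_NS, width="200", height="100")
# Declarative visual primitives: a rectangle and a text label.
ET.SubElement(svg, "{%s}rect" % SVG_NS,
              x="10", y="10", width="80", height="40", fill="steelblue")
label = ET.SubElement(svg, "{%s}text" % SVG_NS, x="10", y="80")
label.text = "Hello, SVG"

doc = ET.tostring(svg, encoding="unicode")
print(doc)
```

The output is a well-formed XML document that an SVG-capable engine such as WebKit or Firefox can render directly, and that can equally be styled with CSS, scripted via the DOM, or transformed with XSLT — unlike an opaque Flash or Silverlight asset.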
Paul Merrell

IDABC - Revision of the EIF and AG - 0 views

  • In 2006, the European Commission started the revision of the European Interoperability Framework (EIF) and the Architecture Guidelines (AG).
  • The European Commission has started drafting the EIF v2.0 in close cooperation with the concerned Commission services and with the Members States as well as with the Candidate Countries and EEA Countries as observers.
  • A draft document from which the final EIF v2.0 will be elaborated was available for external comments until 22 September. The proposal for the new EIF v2.0 that was subject to consultation is available: [3508 Kb]
  •  
    This planning document forms the basis for the forthcoming work to develop European Interoperability Framework v. 2.0. It is the overview of things to come, so to speak. Well worth the read to see how SOA concepts are evolving at the bleeding edge. But also noteworthy for the faceted expansion in the definition of "interoperability," which now includes: [i] political context; [ii] legal interop; [iii] organizational interop; [iv] semantic interop; and [v] technical interop. A lot of people talk the interop talk; this is a document from people who are walking the interop walk, striving to bring order out of the chaos of incompatible ICT systems across the E.U.
  •  
    Full disclosure: I submitted detailed comments on the draft of the subject document on behalf of the Universal Interoperability Council. One theme of my comments was embraced in this document: the document recognizes human-machine interactions as a facet of interoperability, moving accessibility and usability from sideshow treatment in the draft to part of the technical interop dimension of the plan.
Paul Merrell

A Survey and Analysis of Electronic Business Document Standards - 0 views

  • Kabak Y., Dogac A., "A Survey and Analysis of Electronic Business Document Standards." Under revision.
  •  
    Thorough academic overview of interoperability and transformability aspects of five electronic business document standards identified in the tags for this bookmark. Published in 2008, but undergoing revision. "As a final word, although the electronic document standards developed so far proved to be very useful for industry and government applications, further efforts are needed for their harmonization and semantic interoperability."
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection, more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
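The article's point about why upstream "about" collection breaks down against encrypted traffic can be sketched in a few lines of Python. This is a toy illustration, not a depiction of NSA systems: a hash-counter XOR cipher stands in for real cryptography, and the selector is the article's hypothetical example:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Hash-counter keystream; a stand-in for a real stream cipher.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

selector = b"example@gmail.com"  # the article's hypothetical target selector
email = b"Meeting notes: please cc example@gmail.com on the agenda."

# Plaintext traffic: a byte-level pattern match on the selector succeeds,
# sweeping in this non-target message ("incidental" collection).
assert selector in email

# Encrypted traffic: the selector's bytes no longer appear in the stream,
# so the pattern match (and the collection it triggers) cannot occur.
ciphertext = xor_cipher(email, b"session-key")
assert selector not in ciphertext
```

The same asymmetry motivates the retention loophole: if the match cannot run on ciphertext, the only way to apply a selector later is to keep the ciphertext until it can be decrypted.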
Gary Edwards

WebKit, AJAX and ARAX | Readers Welcome ARAX and More: Darryl Taft follow-up zdnet - 0 views

  • A commenter on the ARAX article on eWEEK's site named Gary Edwards said, "It seems to me that Adobe and Microsoft are using the browser plug-in model as a distribution channel for their proprietary run-time engines. Or should we call them VMs [virtual machines]? "The easiest way for Web developers to sidestep problematic browser wars, and still be able to push the envelope of the interactive Web, may well be to write to a universal run-time plug-in like Adobe AIR or Microsoft Silverlight. IMHO, the 'browser' quickly fades away once this direct development sets in." Moreover, Edwards said, "Although there are many ways to slice this discussion, it might be useful to compare Adobe RIA [Rich Internet Applications] and Microsoft Silverlight RIA in terms of Web-ready, highly interactive documents. The Adobe RIA story is quite different from that of Silverlight. Both however exploit the shortcomings of browsers; shortcomings that are in large part, I think, due to the disconnect the browser community has had with the W3C [World Wide Web Consortium]. The W3C forked off the HTML-CSS [Cascading Style Sheets] path, putting the bulk of their attention into XML, RDF and the Semantic Web. The Web developer community stayed the course, pushing the HTML-CSS envelope with JavaScript and some rather stunning CSS magic. "Adobe seems to have picked up the HTML-CSS-JavaScript trail with a Microsoft innovation to take advantage of browser cache, DHTML (Dynamic HTML). DHTML morphs into AJAX, (which [is] so wild as to have difficulty scaling). And AJAX gets tamed by an Adobe-Apple sponsored WebKit."
  •  
    Darryl Taft writes a follow-up article covering the comments on his original AJAX-ARAX, Ruby on Rails, and Microsoft IronPython story.
Gary Edwards

Siding with HTML over XHTML, My Decision to Switch - Monday By Noon - 0 views

  • Publishing content on the Web is in no way limited to professional developers or designers; much of the reason the net is so active is that anyone can make a website. Sure, we (as knowledgeable professionals or hobbyists) all hope to make the Web a better place by doing our part in publishing documents with semantically rich, valid markup, but the reality is that those documents are rare. It’s important to keep in mind the true nature of the Internet: an open platform for information sharing.
  • XHTML2 has some very good ideas that I hope can become part of the web. However, it’s unrealistic to think that all web authors will switch to an XML-based syntax which demands that browsers stop processing the document on the first error. XML’s draconian policy was an attempt to clean up the web. This was done around 1996 when lots of invalid content entered the web. CSS took a different approach: instead of demanding that content isn’t processed, we defined rules for how to handle the undefined. It’s called “forward-compatible parsing” and means we can add new constructs without breaking the old. So, I don’t think XHTML is a realistic option for the masses. HTML 5 is it.
    • Gary Edwards
       
      Great quote from CSS expert Håkon Wium Lie.
  • @marbux: Of course I disagree with your interop assessment, but I wondered how it is that you’re missing the point. I think you confuse web applications with the legacy desktop client/server application model. And that confusion leads to the mistake of trying to transfer the desktop document model to one that could adequately service advancing web applications.
  •  
    A CMS expert argues for HTML over XHTML, explaining his reasons for switching. Excellent read! He nails the basics. For similar reasons, we moved from ODF to ePUB, then to CDF, and finally to the advanced WebKit document model, where wikiWORD will make its stand.
  •  
    See also my comment on the same web page that explains why HTML 5 is NOT it for document exchange between web editing applications.
  •  
    Response to marbux supporting the WebKit layout/document model. Marbux argues that HTML5 is not interoperable and CSS2 near useless; HTML5 fails to provide the interop web applications need. I respond by arguing that the only way to look at web applications is to consider that the browser layout engine is the web application layout engine! Web applications are actually written to the browser layout/document model, OR, to take advantage of browser plug-in capabilities. The interoperability marbux seeks is tied directly to the browser layout engine. In this context, the web format is simply a reflection of that layout engine. If there's an interop problem, it comes from browser madness differentials. The good news is that there are all kinds of efforts to close the browser gap: WHATWG - HTML5, CSS3, W3C DOM, JavaScript libraries, Google GWT (Java to JavaScript), Yahoo GUI, and my favorite, WebKit. The bad news is that the clock is ticking. Microsoft has pulled the trigger, and the great migration of MSOffice client/server systems to the MS WebStack-Mesh architecture has begun. Key to this transition are the WPF-.NET proprietary formats, protocols and interfaces such as XAML, Silverlight, LINQ, and Smart Tags. New business processes are being written, and old legacy desktop-bound processes are being transitioned to this emerging platform. The fight for the Open Web is on, with Microsoft threatening to transition their entire business desktop monopoly to a Web platform they own. ~ge~
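The parsing-policy difference at the heart of Håkon Wium Lie's quote in this entry is easy to demonstrate with Python's standard library: an XML parser must halt on the first well-formedness error, while an HTML parser recovers and reports whatever structure it can. A minimal sketch with invented markup:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# Valid-enough HTML, but not well-formed XML: the <p> tags are never closed.
broken = "<p>Unclosed paragraph<p>Another one"

# XML's draconian policy: parsing fails outright.
xml_ok = True
try:
    ET.fromstring(broken)
except ET.ParseError:
    xml_ok = False

# HTML's forgiving, forward-compatible approach: keep parsing, collect what
# can be recovered.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(broken)

print(xml_ok)          # the XML parser refused the document
print(collector.tags)  # the HTML parser salvaged both paragraph tags
```

That recovery behavior is exactly why an XML-based syntax is a hard sell for the masses of hand-authored pages, and why HTML 5 standardizes error handling instead of forbidding errors.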
Paul Merrell

Accessible Rich Internet Applications (WAI-ARIA) Version 1.0 - 0 views

  • Accessibility of Web content to people with disabilities requires semantic information about widgets, structures, and behaviors, in order to allow Assistive Technologies to make appropriate transformations. This specification provides an ontology of roles, states, and properties that set out an abstract model for accessible interfaces and can be used to improve the accessibility and interoperability of Web Content and Applications. This information can be mapped to accessibility frameworks that use this information to provide alternative access solutions. Similarly, this information can be used to change the rendering of content dynamically using different style sheet properties. The result is an interoperable method for associating behaviors with document-level markup. This document is part of the WAI-ARIA suite described in the ARIA Overview.
  •  
    New working draft from W3C. See also details of the call for comment: http://lists.w3.org/Archives/Public/w3c-wai-ig/2008JulSep/0034
Paul Merrell

OWL 2 Web Ontology Language:New Features and Rationale - 0 views

  • Abstract OWL 2 extends the W3C OWL Web Ontology Language with a small but useful set of features that have been requested by users, for which effective reasoning algorithms are now available, and that OWL tool developers are willing to support. The new features include extra syntactic sugar, additional property and qualified cardinality constructors, extended datatype support, simple metamodeling, and extended annotations. This document is a simple introduction to the new features of the OWL 2 Web Ontology Language, including an explanation of its differences with respect to OWL 1. It also presents the requirements that have motivated the design of the main new features, and their rationale from a theoretical and implementation perspective.
Paul Merrell

The Business Of IT: Gartner Reveals Top 10 Technologies - 0 views

  • The good folks over at the Gartner Group have revealed the top 10 technologies that they believe will change the world over the next four years:
      1. Multicore and hybrid processors
      2. Virtualization and fabric computing
      3. Social networks and social software
      4. Cloud computing and cloud/Web platforms
      5. Web mashups
      6. User interface
      7. Ubiquitous computing
      8. Contextual computing
      9. Augmented reality
      10. Semantics
Paul Merrell

W3C Issues Report on Web and Television Convergence - 0 views

  • 28 March 2011 -- The Web and television convergence story was the focus of W3C's Second Web and TV Workshop, which took place in Berlin in February. Today, W3C publishes a report that summarizes the discussion among the 77 organizations that participated, including broadcasters, telecom companies, cable operators, OTT (over the top) companies, content providers, device vendors, software vendors, Web application providers, researchers, governments, and standardization organizations active in the TV space. Convergence priorities identified in the report include:
      - Adaptive streaming over HTTP
      - Home networking and second-screen scenarios
      - The role of metadata and relation to Semantic Web technology
      - Ensuring that convergent solutions are accessible
      - Profiling and testing
      - Possible extensions to HTML5 for Television
Paul Merrell

Is Apple an Illegal Monopoly? | OneZero - 0 views

  • That’s not a bug. It’s a function of Apple policy. With some exceptions, the company doesn’t let users pay app makers directly for their apps or digital services. They can only pay Apple, which takes a 30% cut of all revenue and then passes 70% to the developer. (For subscription services, which account for the majority of App Store revenues, that 30% cut drops to 15% after the first year.) To tighten its grip, Apple prohibits the affected apps from even telling users how they can pay their creators directly. In 2018, unwilling to continue paying the “Apple tax,” Netflix followed Spotify and Amazon’s Kindle books app in pulling in-app purchases from its iOS app. Users must now sign up elsewhere, such as on the company’s website, in order for the app to become usable. Of course, these brands are big enough to expect that many users will seek them out anyway.
  • Smaller app developers, meanwhile, have little choice but to play by Apple’s rules. That’s true even when they’re competing with Apple’s own apps, which pay no such fees and often enjoy deeper access to users’ devices and information. Now, a handful of developers are speaking out about it — and government regulators are beginning to listen. David Heinemeier Hansson, the co-founder of the project management software company Basecamp, told members of the U.S. House antitrust subcommittee in January that navigating the App Store’s fees, rules, and review processes can feel like a “Kafka-esque nightmare.” One of the world’s most beloved companies, Apple has long enjoyed a reputation for user-friendly products, and it has cultivated an image as a high-minded protector of users’ privacy. The App Store, launched in 2008, stands as one of its most underrated inventions; it has powered the success of the iPhone—perhaps the most profitable product in human history. The concept was that Apple and developers could share in one another’s success with the iPhone user as the ultimate beneficiary.
  • But critics say that gauzy success tale belies the reality of a company that now wields its enormous market power to bully, extort, and sometimes even destroy rivals and business partners alike. The iOS App Store, in their telling, is a case study in anti-competitive corporate behavior. And they’re fighting to change that — by breaking its choke hold on the Apple ecosystem.
  • Whether Apple customers have a real choice in mobile platforms, once they’ve bought into the company’s ecosystem, is another question. In theory, they could trade in their pricey hardware for devices that run Android, which offers equivalents of many iOS features and apps. In reality, Apple has built its empire on customer lock-in: making its own gadgets and services work seamlessly with one another, but not with those of rival companies. Tasks as simple as texting your friends can become a migraine-inducing mess when you switch from iOS to Android. The more Apple products you buy, the more onerous it becomes to abandon ship.
  • The case against Apple goes beyond iOS. At a time when Apple is trying to reinvent itself as a services company to offset plateauing hardware sales — pushing subscriptions to Apple Music, Apple TV+, Apple News+, and Apple Arcade, as well as its own credit card — the antitrust concerns are growing more urgent. Once a theoretical debate, the question of whether its App Store constitutes an illegal monopoly is now being actively litigated on multiple fronts.
  • The company faces an antitrust lawsuit from consumers; a separate antitrust lawsuit from developers; a formal antitrust complaint from Spotify in the European Union; investigations by the Federal Trade Commission and the Department of Justice; and an inquiry by the antitrust subcommittee of the U.S. House of Representatives. At stake are not only Apple’s profits, but the future of mobile software. Apple insists that it isn’t a monopoly, and that it strives to make the app store a fair and level playing field even as its own apps compete on that field. But in the face of unprecedented scrutiny, there are signs that the famously stubborn company may be feeling the pressure to prove it.
  • Tile is hardly alone in its grievances. Apple’s penchant for copying key features of third-party apps and integrating them into its operating system is so well-known among developers that it has a name: “Sherlocking.” It’s a reference to the time—in the early 2000s—when Apple kneecapped a popular third-party web-search interface for Mac OS X, called Watson. Apple built virtually all of Watson’s functionality into its own feature, called Sherlock. In a 2006 blog post, Watson’s developer, Karelia Software, recalled how Apple’s then-CEO Steve Jobs responded when they complained about the company’s 2002 power play. “Here’s how I see it,” Jobs said, according to Karelia founder Dan Wood’s loose paraphrase. “You know those handcars, the little machines that people stand on and pump to move along on the train tracks? That’s Karelia. Apple is the steam train that owns the tracks.” From an antitrust standpoint, the metaphor is almost too perfect. It was the monopoly power of railroads in the late 19th century — and their ability to make or break the businesses that used their tracks — that spurred the first U.S. antitrust regulations. There’s another Jobs quote that’s relevant here. Referencing Picasso’s famous saying, “Good artists copy, great artists steal,” Jobs said of Apple in 2006, “We have always been shameless about stealing great ideas.” Company executives later tried to finesse the quote’s semantics, but there’s no denying that much of iOS today is built on ideas that were not originally Apple’s.
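For concreteness, the revenue split described earlier in this entry (Apple keeps 30%, dropping to 15% for subscriptions after the first year) works out as below; the function name is ours, invented for illustration:

```python
def developer_payout(revenue: float, subscription_year: int = 0) -> float:
    """Developer's share of `revenue` after Apple's App Store commission.

    Rates as reported in the article: 30% standard; 15% for subscription
    revenue after the first year (subscription_year > 1).
    """
    rate = 0.15 if subscription_year > 1 else 0.30
    return round(revenue * (1 - rate), 2)

print(developer_payout(100.0))                       # 70.0 (standard 30% cut)
print(developer_payout(100.0, subscription_year=1))  # 70.0 (year one of a subscription)
print(developer_payout(100.0, subscription_year=2))  # 85.0 (15% cut from year two on)
```

So on every $100 of standard app revenue, $30 goes to Apple before the developer sees a cent — the economics behind Netflix's and Spotify's decision to route payments off-platform.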