
Open Web: Group items tagged "working drafts"


Paul Merrell

First working draft of W3C HTML5 - 0 views

  • HTML5: A vocabulary and associated APIs for HTML and XHTML
  • This specification defines the 5th major revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features are introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability.
  • The W3C HTML Working Group is the W3C working group responsible for this specification's progress along the W3C Recommendation track. This specification is the 24 June 2010 Working Draft snapshot. Work on this specification is also done at the WHATWG. The W3C HTML working group actively pursues convergence with the WHATWG, as required by the W3C HTML working group charter.
Paul Merrell

Eight HTML5 Drafts Updated, W3C News Archive: 2010 - 0 views

  • The HTML Working Group published eight documents:
    - Working Drafts of the HTML5 specification, the accompanying explanatory document HTML5 differences from HTML4, and the related non-normative reference HTML: The Markup Language.
    - Working Drafts of the specifications HTML+RDFa 1.1 and HTML Microdata, which define mechanisms for embedding machine-readable data in HTML documents, and the specification HTML Canvas 2D Context, which defines a 2D immediate-mode graphics API for use with the HTML5 <canvas> element.
    - HTML5: Techniques for providing useful text alternatives, which is intended to help authors provide useful text alternatives for images in HTML documents.
    - Polyglot Markup: HTML-Compatible XHTML Documents, which is intended to help authors produce XHTML documents that are also compatible with non-XML HTML syntax and parsing rules.
Gary Edwards

How Sir Tim Berners-Lee cut the Gordian Knot of HTML5 | Technology | guardian.co.uk - 0 views

  • Good article with excellent URL references. Bottom line is that the W3C will support the advance of HTML5 and controversial components such as "canvas", HTML + RDFa, and HTML microdata. Excerpt: The key question is: who's going to get their way with HTML5? The companies who want to keep the kitchen sink in? Or those which want it to be a more flexible format which might also be able to displace some rather comfortable organisations that are doing fine with things as they are? Adobe, it turned out, seemed to be trying to slow things down a little. It was accused of trying to put HTML5 "on hold". It strongly denied it. Others said it was using "procedural bullshit". Then Berners-Lee weighed in with a post on the W3 mailing list. First he noted the history: "Some in the community have raised questions recently about whether some work products of the HTML Working Group are within the scope of the Group's charter. Specifically in question were the HTML Canvas 2D API, and the HTML Microdata and HTML+RDFa Working Drafts." (Translation: Adobe seems to have been trying to slow things down on at least one of these points.) And then he pushes: "I agree with the WG [working group] chairs that these items -- data and canvas -- are reasonable areas of work for the group. It is appropriate for the group to publish documents in this area." Chop! And that's it. There goes the Gordian Knot. With that simple message, Berners-Lee has probably created a fresh set of headaches for Adobe - but it means that we can also look forward to a web with open standards, rather than proprietary ones, and where commercial interests don't get to push it around.
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • ...9 more annotations...
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
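A hedged sketch, in Python, of the passive-tracking technique those two annotations describe. The User-Agent format and serial scheme here are invented for illustration (the actual Kaspersky encoding is not reproduced in the excerpts); the point is only that a stable token inside an HTTP header lets a network observer re-identify one machine across many connections:

```python
import re

# Hypothetical anti-virus User-Agent carrying an encoded serial number.
UA_PATTERN = re.compile(r"serial/([A-Z0-9-]+)")

def extract_machine_id(user_agent):
    """Return the serial-like token, or None if absent."""
    match = UA_PATTERN.search(user_agent)
    return match.group(1) if match else None

# Map machine identifier -> set of source IPs seen using it.
seen = {}

def observe_request(client_ip, user_agent):
    machine_id = extract_machine_id(user_agent)
    if machine_id:
        # The same token recurring from different addresses still points
        # at one installation -- the core of the tracking risk.
        seen.setdefault(machine_id, set()).add(client_ip)

observe_request("203.0.113.7", "ExampleAV/9.0 serial/AB12-CD34")
observe_request("198.51.100.2", "ExampleAV/9.0 serial/AB12-CD34")
print(seen)  # {'AB12-CD34': {'203.0.113.7', '198.51.100.2'}}
```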
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under "More Targets!" Those companies include Check Point software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article:
    - Kaspersky User-Agent Strings (NSA)
    - Project CAMBERDADA (NSA)
    - NDIST: GCHQ's Developing Cyber Defence Mission (GCHQ)
    - GCHQ Application for Renewal of Warrant GPW/1160
    - Software Reverse Engineering (GCHQ)
    - Reverse Engineering (GCHQ Wiki)
    - Malware Analysis & Reverse Engineering, ACNO Skill Levels (GCHQ)
Paul Merrell

HTML+RDFa 1.1 - 16 views

  • HTML+RDFa 1.1: Support for RDFa in HTML4 and HTML5. W3C Working Draft 19 October 2010
  • This specification defines rules and guidelines for adapting the RDFa Core 1.1 specification for use in HTML5 and XHTML5. The rules defined in this specification not only apply to HTML5 documents in non-XML and XML mode, but also to HTML4 and XHTML documents interpreted through the HTML5 parsing rules.
  • This specification is an extension to the HTML5 language. All normative content in the HTML5 specification, unless specifically overridden by this specification, is intended to be the basis for this specification.
  • ...1 more annotation...
  • 1. Introduction. This section is non-normative. Today's web is built predominantly for human consumption. Even as machine-readable data begins to permeate the web, it is typically distributed in a separate file, with a separate format, and very limited correspondence between the human and machine versions. As a result, web browsers can provide only minimal assistance to humans in parsing and processing web data: browsers only see presentation information. RDFa is intended to solve the problem of machine-readable data in HTML documents. RDFa provides a set of HTML attributes to augment visual data with machine-readable hints. Using RDFa, authors may turn their existing human-visible text and links into machine-readable data without repeating content.
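A minimal sketch of that idea: RDFa attributes such as vocab, typeof, and property ride along on ordinary HTML elements, so a program can recover structured data from the same markup a human reads. This is a toy extractor under simplified assumptions, not a conforming RDFa 1.1 processor; the document and vocabulary are invented for illustration:

```python
from html.parser import HTMLParser

DOC = """
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Alice Example</span> wrote
  <a property="url" href="http://example.org/post">this post</a>.
</div>
"""

class RDFaSketch(HTMLParser):
    """Collect (property, value) pairs from RDFa-annotated markup."""
    def __init__(self):
        super().__init__()
        self.pending = None   # property name waiting for its text value
        self.pairs = []       # (property, value) pairs found so far

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            if "href" in a:
                # Links carry their machine-readable value in href.
                self.pairs.append((a["property"], a["href"]))
            else:
                self.pending = a["property"]

    def handle_data(self, data):
        if self.pending and data.strip():
            self.pairs.append((self.pending, data.strip()))
            self.pending = None

p = RDFaSketch()
p.feed(DOC)
print(p.pairs)  # [('name', 'Alice Example'), ('url', 'http://example.org/post')]
```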
Paul Merrell

Archive of W3C News in 2009 - 0 views

  • 2009-07-02: Today the Director announces that when the XHTML 2 Working Group charter expires as scheduled at the end of 2009, the charter will not be renewed. By doing so, and by increasing resources in the HTML Working Group, W3C hopes to accelerate the progress of HTML 5 and clarify W3C's position regarding the future of HTML. A FAQ answers questions about the future of deliverables of the XHTML 2 Working Group, and the status of various discussions related to HTML.
  • 2009-08-26: The HTML Working Group has published Working Drafts of HTML 5 and HTML 5 differences from HTML 4. In HTML 5, new features are introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability. "HTML 5 differences from HTML 4" describes the differences between HTML 4 and HTML 5 and provides some of the rationale for the changes.
Gary Edwards

Growing pains afflict HTML5 standardization | Deep Tech - CNET News - 0 views

  • The World Wide Web Consortium's return to HTML standardization after years of absence has produced tensions with the more informal Web Hypertext Application Technology Working Group (WHATWG) that shouldered the HTML burden during that absence. Some examples of language that's cropped up this month on the W3C's HTML Working Group mailing list: "childish," "intolerable," "ridiculous," "shenanigans." And there's a concrete manifestation of the divisiveness: the WHATWG and W3C versions of the HTML5 specification, though both stemming from the same source material, have diverged in some areas. Some of the differences are relatively minor, and there are strong incentives to converge the two drafts of the HTML5 specification so that browser makers and Web developers aren't faced with the prospect of incompatibilities. In the meantime, though, the overseers of the Web are clashing during a time when their important new standard is just arriving in the spotlight.
Gary Edwards

Are the feds the first to a common cloud definition? | The Wisdom of Clouds - CNET News - 0 views

  • Cisco's James Urquhart discusses the NIST definition of Cloud Computing. The National Institute of Standards and Technology is a non-regulatory branch of the Commerce Department and is responsible for much of the USA's official participation in World Standards organizations. This is an important discussion, but I'm a bit disappointed by the loose use of the term "network". I guess they mean the Internet? No mention of RESTful computing or Open Web Standards either. Some interesting clips: ...(The NIST's) definition of cloud computing will be the de facto standard definition that the entire US government will be given... In creating this definition, NIST consulted extensively with the private sector, including a wide range of vendors, consultants and industry pundits, including yours truly. Below is the draft NIST working definition of Cloud Computing. I should note, this definition is a work in progress and therefore is open to public ratification & comment. The initial feedback was very positive from the federal CIOs who were presented it yesterday in DC. Barring any last-minute lobbying I doubt we'll see many more major revisions. ....... Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of five key characteristics, three delivery models, and four deployment models.
  • Gary, NIST really is not "responsible for much of the USA's official participation in World Standards organizations." Lots of legal analysis omitted, but the bottom line is that NIST would have had to be delegated that responsibility by the President, but never was. However, that did not stop NIST from signing over virtually all responsibility for U.S. participation in international standard development to the private ANSI, without so much as a public notice and comment rulemaking process. See section 3 at http://ts.nist.gov/Standards/Conformity/ansimou.cfm. Absolutely illegal, including at least two bright-line violations of the U.S. Constitution. But the Feds have unmistakably abdicated their legal responsibilities in regard to international standards to the private sector.
Paul Merrell

RDFa API - 0 views

  • RDFa API: An API for extracting structured data from Web documents. W3C Working Draft 08 June 2010
  • RDFa [RDFA-CORE] enables authors to publish structured information that is both human- and machine-readable. Concepts that have traditionally been difficult for machines to detect, like people, places, events, music, movies, and recipes, are now easily marked up in Web documents. While publishing this data is vital to the growth of Linked Data, using the information to improve the collective utility of the Web for humankind is the true goal. To accomplish this goal, it must be simple for Web developers to extract and utilize structured information from a Web document. This document details such a mechanism; an RDFa Document Object Model Application Programming Interface (RDFa DOM API) that allows simple extraction and usage of structured information from a Web document.
  • This document is a detailed specification for an RDFa DOM API. The document is primarily intended for the following audiences: User Agent developers that are providing a mechanism to programmatically extract RDF Triples from RDFa in a host language such as XHTML+RDFa [XHTML-RDFA], HTML+RDFa [HTML-RDFA] or SVG Tiny 1.2 [SVGTINY12]; DOM tool developers that want to provide a mechanism for extracting RDFa content via programming languages such as JavaScript, Python, Ruby, or Perl; and developers that want to understand the inner workings and design criteria for the RDFa DOM API.
Paul Merrell

Last Call Working Draft -- W3C Authoring Tool Accessibility Guidelines (ATAG) 2.0 - 0 views

  • This is a Working Draft of the Authoring Tool Accessibility Guidelines (ATAG) version 2.0. This document includes recommendations for assisting authoring tool developers to make the authoring tools that they develop more accessible to people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, motor difficulties, speech difficulties, and others. Accessibility, from an authoring tool perspective, includes addressing the needs of two (potentially overlapping) user groups with disabilities: authors of web content, whose needs are met by ensuring that the authoring tool user interface itself is accessible (addressed by Part A of the guidelines), and end users of web content, whose needs are met by ensuring that all authors are enabled, supported, and guided towards producing accessible web content (addressed by Part B of the guidelines).
  • Examples of authoring tools: ATAG 2.0 applies to a wide variety of web content generating applications, including, but not limited to:
    - web page authoring tools (e.g., WYSIWYG HTML editors)
    - software for directly editing source code (see note below)
    - software for converting to web content technologies (e.g., "Save as HTML" features in office suites)
    - integrated development environments (e.g., for web application development)
    - software that generates web content on the basis of templates, scripts, command-line input or "wizard"-type processes
    - software for rapidly updating portions of web pages (e.g., blogging, wikis, online forums)
    - software for generating/managing entire web sites (e.g., content management systems, courseware tools, content aggregators)
    - email clients that send messages in web content technologies
    - multimedia authoring tools
    - debugging tools for web content
    - software for creating mobile web applications
  • Web-based and non-web-based: ATAG 2.0 applies equally to authoring tools of web content that are web-based, non-web-based or a combination (e.g., a non-web-based markup editor with a web-based help system, a web-based content management system with a non-web-based file uploader client).
    Real-time publishing: ATAG 2.0 applies to authoring tools with workflows that involve real-time publishing of web content (e.g., some collaborative tools). For these authoring tools, conformance to Part B of ATAG 2.0 may involve some combination of real-time accessibility supports and additional accessibility supports available after the real-time authoring session (e.g., the ability to add captions for audio that was initially published in real-time). For more information, see Implementing ATAG 2.0 - Appendix E: Real-time content production.
    Text editors: ATAG 2.0 is not intended to apply to simple text editors that can be used to edit source content, but that include no support for the production of any particular web content technology. In contrast, ATAG 2.0 can apply to more sophisticated source content editors that support the production of specific web content technologies (e.g., with syntax checking, markup prediction, etc.).
  • The link is the latest-version link, so the page should update when this specification graduates to a W3C Recommendation.
Gary Edwards

Cisco "Thinking About" Going Up Against Microsoft Office and Google Apps - 0 views

  • Knock me over with a feather. Now comes news that Cisco wants to challenge Microsoft Office and Google Apps. Paul Smalera of Business Insider questions the wisdom of this initiative, insisting that Cisco must know it can't beat either MSOffice or Google Apps. Maybe Cisco is fishing for help? Where is that wave-maker application Jason and Florian are said to be working on? :) Excerpt: Cisco VP Doug Dennerline told reporters, the company is "thinking about" adding document drafting and sharing to WebEx, which already features instant messaging, online meeting and email services.
Paul Merrell

Cover Pages: XML Daily Newslink: Friday, 12 November 2010 - 0 views

  • HTTP Framework for Time-Based Access to Resource States: Memento. Herbert Van de Sompel, Michael Nelson, Robert Sanderson; IETF Internet-Draft. Representatives of Los Alamos National Laboratory and Old Dominion University have published a first IETF Working Draft of HTTP Framework for Time-Based Access to Resource States: Memento. According to the editor's iMinds blog: "While the days of human time travel as described in many a science fiction novel are yet to come, time travel on the Web has recently become a reality thanks to the Memento project. In essence, Memento adds a time dimension to the Web: enter the Web address of a resource in your browser and set a time slider to a desired moment in the Web's past, and see what the resource looked like around that time... Technically, Memento achieves this by: (a) Leveraging systems that host archival Web content, including Web archives, content management systems, and software versioning systems; (b) Extending the Web's most commonly used protocol (HTTP) with the capability to specify a datetime in protocol requests, and by applying an existing HTTP capability (content negotiation) in a new dimension: 'time'. The result is a Web in which navigating the past is as seamless as navigating the present... The Memento concepts have attracted significant international attention since they were first published in November 2009, and compliant tools are already emerging. For example, at the client side there is the MementoFox add-on for FireFox, and a Memento app for Android; at the server side, there is a plug-in for MediaWiki servers, and the Wayback software that is widely used by Web archives, worldwide, was recently enhanced with Memento support..."
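The datetime negotiation the draft describes can be exercised with nothing more than an extra request header. A minimal sketch, assuming the public Memento aggregator at timetravel.mementoweb.org as the TimeGate endpoint (any Memento-compliant server would do); Accept-Datetime and Memento-Datetime are the headers the Memento draft defines:

```python
import urllib.request

# Ask a Memento TimeGate for http://example.com/ as it looked in 2009.
req = urllib.request.Request(
    "http://timetravel.mementoweb.org/timegate/http://example.com/",
    headers={"Accept-Datetime": "Tue, 10 Nov 2009 08:00:00 GMT"},
)
with urllib.request.urlopen(req) as resp:
    # The TimeGate redirects to an archived snapshot near that moment;
    # Memento-Datetime states when the snapshot was captured.
    print(resp.url)
    print(resp.headers.get("Memento-Datetime"))
```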
Paul Merrell

YouTube To Censor "Controversial" Content, ADL On Board As Flagger - 0 views

  • Chief among the groups seeking to clamp down on independent media has been Google, the massive technology company with deep connections to the U.S. intelligence community, as well as to U.S. government and business elites.
  • Since 2015, Google has worked to become the Internet’s “Ministry of Truth,” first through its creation of the First Draft Coalition and more recently via major changes made to its search engine that curtail public access to news sites independent of the corporate media.
  • Google has now stepped up its war on free speech and the freedom of the press through its popular subsidiary, YouTube. On Tuesday, YouTube announced online that it is set to begin censoring content deemed “controversial,” even if that content does not break any laws or violate YouTube’s user agreement. Misleadingly dubbed as an effort “to fight terror content online,” the new program will flag content for review through a mix of machine algorithms and “human review,” guided by standards set up by “expert NGOs and institutions” that are part of YouTube’s “Trusted Flagger” program. YouTube stated that such organizations “bring expert knowledge of complex issues like hate speech, radicalization, and terrorism.” One of the leading institutions directing the course of the Trusted Flagger program is the Anti-Defamation League (ADL). The ADL was initially founded to “stop the defamation of the Jewish people and to secure justice and fair treatment to all” but has gained a reputation over the years for labeling any critic of Israel’s government as an “anti-Semite.” For instance, characterizing Israeli policies towards the Palestinians as “racist” or “apartheid-like” is considered “hate speech” by the ADL, as is accusing Israel of war crimes or attempted ethnic cleansing. The ADL has even described explicitly Jewish organizations who are critical of Israel’s government as being “anti-Semitic.”
Paul Merrell

WG Review: Internet Wideband Audio Codec (codec) - 0 views

  • A new IETF working group has been proposed in the Real-time Applications and Infrastructure Area. The IESG has not made any determination as yet. The following draft charter was submitted, and is provided for informational purposes only. Please send your comments to the IESG mailing list (iesg at ietf.org) by January 20, 2010. ... According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions:
    1. Are optimized for use in interactive Internet applications.
    2. Are published by a recognized standards development organization (SDO) and therefore subject to clear change control.
    3. Can be widely implemented and easily distributed among application developers, service operators, and end users.
    ... The goal of this working group is to develop a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users. Core technical considerations include, but are not necessarily limited to, the following:
    1. Designing for use in interactive applications (examples include, but are not limited to, point-to-point voice calls, multi-party voice conferencing, telepresence, teleoperation, in-game voice chat, and live music performance)
    2. Addressing the real transport conditions of the Internet as identified and prioritized by the working group
    3. Ensuring interoperability with the Real-time Transport Protocol (RTP), including secure transport via SRTP
    4. Ensuring interoperability with Internet signaling technologies such as Session Initiation Protocol (SIP), Session Description Protocol (SDP), and Extensible Messaging and Presence Protocol (XMPP); however, the result should not depend on the details of any particular signaling technology.
Paul Merrell

The HTML 5 Layout Elements Rundown - 0 views

  • HTML 5 is an interesting beastie. The specification was not planned; the W3C was committed to HTML 4.01 as the last word in HTML. As such, most of the requests for HTML 5 came from the HTML user community itself, largely through the advent of the Web Hypertext Application Technology Working Group (WHATWG). The push from WHATWG was strong enough to prompt the formation of an HTML 5 working group a couple of years ago. Since then, the HTML 5 working group has slowly gone through the process of taking a somewhat hand-waving specification and recasting it in W3C terms, along with all the politics that the process entails. On April 23, 2009, the HTML 5 group released the most recent draft of the specification. Overall, it represents a considerable simplification from the previous release, especially as a number of initially proposed changes to the specification have been scaled back; the group has defined roles for those proposed changes elsewhere. HTML 5 is a broad specification, and consequently, dozens of distinct changes (more than a single article can reasonably cover in any detail) occurred between HTML 4 and 5. This article focuses on the HTML 5 layout elements. Subsequent articles will examine forms-related changes (which are substantial), the new media elements, and DOM-related changes.
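As a taste of what those layout elements look like in practice, here is a sketch that feeds a hypothetical HTML 5 page skeleton through Python's standard-library parser and prints the nesting of the new sectioning elements. The markup is invented for illustration; the element names (header, nav, article, section, aside, footer) are the ones the article goes on to cover:

```python
from html.parser import HTMLParser

PAGE = """
<body>
  <header><h1>Site title</h1></header>
  <nav><a href="/">Home</a></nav>
  <article>
    <section><h2>First section</h2></section>
    <aside>Related links</aside>
  </article>
  <footer>Copyright notice</footer>
</body>
"""

SECTIONING = {"header", "nav", "article", "section", "aside", "footer"}

class Outline(HTMLParser):
    """Print an indented outline of the HTML 5 layout elements."""
    def __init__(self):
        super().__init__()
        self.depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in SECTIONING:
            print("  " * self.depth + "<" + tag + ">")
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SECTIONING:
            self.depth -= 1

Outline().feed(PAGE)
# Output:
# <header>
# <nav>
# <article>
#   <section>
#   <aside>
# <footer>
```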
Paul Merrell

Leaked: ITU's secret Internet surveillance standard discussion draft - Boing Boing - 0 views

  • Yesterday morning, I wrote about the closed-door International Telecommunications Union meeting where they were working on standardizing "deep packet inspection" -- a technology crucial to mass Internet surveillance. Other standards bodies have refused to touch DPI because of the risk to Internet users that arises from making it easier to spy on them. But not the ITU. The ITU standardization effort has been conducted in secret, without public scrutiny. Now, Asher Wolf writes,
  • I publicly asked (via Twitter) if anyone could give me access to documents relating to the ITU's DPI recommendations, now endorsed by the U.N. The ITU's senior communications officer, Toby Johnson, emailed me a copy of their unpublished policy recommendations. OOOPS! 5 hours later, they emailed, asking me not to publish it, in part or in whole, and that it was for my eyes only. Please publish it (credit me for sending it to you.) Also note: 1. The recommendations *NEVER* discuss the impact of DPI.
  • 2. A FEW EXAMPLES OF POTENTIAL DPI USE CITED BY THE ITU:
    - "I.9.2 DPI engine use case: Simple fixed string matching for BitTorrent"
    - "II.3.4 Example “Forwarding copy right protected audio content”"
    - "II.3.6 Example “Detection of a specific transferred file from a particular user”"
    - "II.4.2 Example “Security check – Block SIP messages (across entire SIP traffic) with specific content types”"
    - "II.4.5 Example “Identify particular host by evaluating all RTCP SDES packets”"
    - "II.4.6 Example “Measure Spanish Jabber traffic”"
    - "II.4.7 Example “Blocking of dedicated games”"
    - "II.4.11 Example “Identify uploading BitTorrent users”"
    - "II.4.13 Example “Blocking Peer-to-Peer VoIP telephony with proprietary end-to-end application control protocols”"
    - "II.5.1 Example “Detecting a specific Peer-to-Peer VoIP telephony with proprietary end-to-end application control protocols”"
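To make the first of those use cases concrete, here is a minimal sketch of what "simple fixed string matching for BitTorrent" amounts to. The BitTorrent peer handshake genuinely begins with a 0x13 length byte followed by the literal string "BitTorrent protocol"; the classifier around it is a simplified stand-in for a real DPI engine, not anything from the ITU draft itself:

```python
# First bytes of a BitTorrent peer handshake: a length prefix (19,
# i.e. 0x13) followed by the protocol name.
SIGNATURE = b"\x13BitTorrent protocol"

def classify_payload(payload: bytes) -> str:
    """Label a packet payload by fixed-string signature matching."""
    if payload.startswith(SIGNATURE):
        return "bittorrent-handshake"
    return "unclassified"

print(classify_payload(b"\x13BitTorrent protocol" + bytes(8)))  # bittorrent-handshake
print(classify_payload(b"GET / HTTP/1.1\r\n"))                  # unclassified
```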
Paul Merrell

Thinking XML: The XML flavor of HTML5 - 1 views

  • 6 recommendations for developers using the next generation of the web's native language
  • In this article, I shall provide a practical guide that illustrates the state of play when it comes to XML in the HTML5 world. The article is written for what I call the desperate web hacker: someone who is not a W3C standards guru, but interested in either generating XHTML5 on the web, or consuming it in a simple way (that is, to consume information, rather than worrying about the enormous complexity of rendering). I'll admit that some of my recommendations will be painful for me to make, as a long-time advocate for processing XML the right way. Remember that HTML5 is still a W3C working draft, and it might be a while before it becomes a full recommendation. Many of its features are stable, though, and already well-implemented on the web.