
Open Web: Group items tagged OpenWeb-Standards


Gary Edwards

Does It Matter Who Wins the Browser Wars? Only if you care about the Future of the Open... - 1 views

  •  
    The Future of the Open Web: You're right that the browser wars do not matter - except for this point of demarcation: browsers that support HTML+ and browsers that support 1998 HTML. (Extensive comment by ~ge~.)

    Not all Web services and applications support HTML+, the rapidly advancing set of technologies that includes HTML5, CSS3, SVG/Canvas, and JavaScript (including the libraries and JSON). Microsoft has chosen to draw the Open Web line at what amounts to a 1998-2001 level of HTML/CSS. Above that line, it provisions a rich-client / rich-server Web model bound to the .NET-WPF platform, where C#, Silverlight, and XAML are very prominent. Noticeably, Open Web standards are for the most part replaced at this richer MSWeb level by proprietary technologies. Through its limited support for HTML/CSS, IE8 itself acts to dumb down the Open Web.

    The effect is that business systems and day-to-day workflow processes bound to the ubiquitous and very "rich" MSOffice Productivity Environment have little choice, when it comes to transitioning to the Web, but to stay on the Microsoft 2010 treadmill. Sure, at some point legacy business processes and systems will be rewritten for the Web. The question is: will it be the Open Web or the MS-Web? The Open Web standards are the dividing line between owning your information and content, or having that content bound to a Web platform comprised of proprietary Microsoft services, systems and applications.

    Web designers and developers are still caught up in the browser wars. They worry incessantly about how to dumb down Web content and services to meet the limited functionality of IE. This sucks. So everyone continues to watch "the browser wars" stats. What they are really watching for, though, is that magic moment where "combined" HTML+ browser uptake in market share signals that they can start to implement highly graphical and collaboratively interactive HTML+-specific content. Meanwhile, the greater Web is a
Gary Edwards

What to expect from HTML 5 | Developer World - InfoWorld - 0 views

  •  
    Neil McAllister provides a good intro to HTML5 and what it will mean to the future of the Web. It's just an intro, but the links he provides are excellent resources for a deep dive. Excerpt: "Among Web developers, anticipation is mounting for HTML 5, the overhaul of the Web markup language currently under way at the World Wide Web Consortium (W3C). For many, the revamping is long overdue. HTML hasn't had a proper upgrade in more than a decade. In fact, the last markup language to win W3C Recommendation status -- the final stage of the Web standards process -- was XHTML 1.1 in 2001. In the intervening years, Web developers have grown increasingly restless. Many claim the HTML and XHTML standards have become outdated, and that their document-centric focus does not adequately address the needs of modern Web applications. HTML 5 aims to change all that. When it is finalized, the new standard will include tags and APIs for improved interactivity, multimedia, and localization. As experimental support for HTML 5 features has crept into the current crop of Web browsers, some developers have even begun voicing hope that this new, modernized HTML will free them from reliance on proprietary plug-ins such as Flash, QuickTime, and Silverlight."
Gary Edwards

Are the feds the first to a common cloud definition? | The Wisdom of Clouds - CNET News - 0 views

  •  
    Cisco's James Urquhart discusses the NIST definition of cloud computing. The National Institute of Standards and Technology is a non-regulatory branch of the Commerce Department and is responsible for much of the USA's official participation in World Standards organizations. This is an important discussion, but I'm a bit disappointed by the loose use of the term "network". I guess they mean the Internet? No mention of RESTful computing or Open Web standards either. Some interesting clips:

    ...(The NIST's) definition of cloud computing will be the de facto standard definition that the entire US government will be given... In creating this definition, NIST consulted extensively with the private sector, including a wide range of vendors, consultants and industry pundits, including yours truly. Below is the draft NIST working definition of cloud computing. I should note, this definition is a work in progress and therefore is open to public ratification & comment. The initial feedback was very positive from the federal CIOs who were presented it yesterday in DC. Barring any last-minute lobbying I doubt we'll see many more major revisions.

    ...Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of five key characteristics, three delivery models, and four deployment models.
  •  
    Gary, NIST really is not "responsible for much of the USA's official participation in World Standards organizations." Lots of legal analysis omitted, but the bottom line is that NIST would have had to be delegated that responsibility by the President, but never was. However, that did not stop NIST from signing over virtually all responsibility for U.S. participation in international standard development to the private ANSI, without so much as a public notice and comment rulemaking process. See section 3 at http://ts.nist.gov/Standards/Conformity/ansimou.cfm. Absolutely illegal, including at least two bright-line violations of the U.S. Constitution. But the Feds have unmistakably abdicated their legal responsibilities in regard to international standards to the private sector.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges: In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML-based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
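    A minimal sketch of what such class-based extensibility might look like (the class names "epigraph" and "summary" are invented here for illustration, echoing the distinctions discussed above; they are not drawn from any published microformat):

        <div class="chapter">
          <p class="epigraph">An opening quotation, marked as such.</p>
          <p>An ordinary paragraph needs no class at all.</p>
          <p class="summary">A closing summary, distinguished by its
            class attribute rather than by a custom element.</p>
        </div>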
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
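    As a rough sketch, unpacking a typical ePub archive reveals a layout along these lines (the file names under OEBPS/ are illustrative and vary from tool to tool):

        mimetype                  (the literal string "application/epub+zip")
        META-INF/container.xml    (points to the package file)
        OEBPS/content.opf         (the book's metadata and manifest)
        OEBPS/toc.ncx             (navigation / table of contents)
        OEBPS/chapter01.xhtml     (the book's content, as XHTML)
        OEBPS/styles.css          (presentation)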
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, yielding a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
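    To illustrate the kind of cleanup involved, a hypothetical before-and-after (the exact class name InDesign emits will differ): the export produces presentation-oriented tagging such as

        <p class="bullet">First point</p>
        <p class="bullet">Second point</p>

    which we converted into structural XHTML:

        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>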
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
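    A sketch of the two-template idea (the selector names are assumed, not taken from the project's actual stylesheets): the screen template reserves a content column plus navigation space, while the export template simply suppresses the chrome:

        /* screen template: content column plus navigation */
        #nav     { position: absolute; left: 0; width: 14em; }
        #content { margin-left: 16em; max-width: 40em; }

        /* export template: bare-bones, content only */
        #nav     { display: none; }
        #content { margin: 0; }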
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (XSL Transformations) to do the work. XSLT is part of the overall XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
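    Running such a transformation is a one-line affair; a hypothetical invocation (the stylesheet and file names are invented for illustration) looks like:

        xsltproc -o chapter01.icml xhtml2icml.xsl chapter01.xhtml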
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
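    The heart of such a script is a handful of template rules. A minimal, hypothetical fragment (the paragraph style name "Body" is an assumption; the actual script is the one cited as [18]) might map XHTML paragraphs to ICML paragraph ranges like this:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <!-- Each XHTML paragraph becomes an ICML paragraph range
               tagged with a named InDesign paragraph style. -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>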
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
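    The ebook stylesheet itself need only be a few rules; a sketch of the sort of thing we mean (the values and the "first" class are illustrative):

        h1      { text-align: center; margin: 2em 0 1em 0; }
        p       { margin: 0; text-indent: 1.5em; }
        p.first { text-indent: 0; }  /* opening paragraphs, unindented */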
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

In Mobile, Fragmentation is Forever. Deal With It. - washingtonpost.com - 0 views

  •  
    I disagree with the author's conclusions here. He misses some very significant developments, particularly around Google, WebKit, and WebKit-HTML5.

    For instance, there is this article out today: "Google Really is Giving Away Free Nexus One and Droid Handsets to Developers". Also, Palm is working on a WiMAX/WiFi version of their WebOS (WebKit) smartphone for Sprint. Sprint and Clearwire are pushing forward with a very aggressive WiMAX rollout in the USA. San Francisco should go online this year! One of the more interesting things about the Sprint WiMAX plan is that they have a set fee of $69.00 per month that covers EVERYTHING: cellphone, WiMAX Web browsing, video and data connectivity, texting (SMS) and VoIP. Major Sprint competitors Verizon, AT&T and T-Mobile charge $69 per month, but it only covers cellphone access. Everything else is extra, and also at low speed / low bandwidth - 3G at best. WiMAX, however, is a 4G screamer. It's also an open standard. (Verizon FiOS and LTE are comparable and said to be coming soon, but they are proprietary technologies.) The cable guys are interesting in that they are major backers of WiMAX, but they also have a bandwidth-explosive technology called DOCSIS.

    There is an interesting article at TechCrunch, "In Mobile, Fragmentation is Forever. Deal With It." I disagree entirely with the author's conclusion. WebKit is capable of providing a universal HTML5 application developer layer for mobile and desktop browser computing. It's supported by Apple, Google, Palm (WebOS), Nokia, RIM (BlackBerry) and others to such an extent that 85% of all smartphones shipped this year will either ship with WebKit or with an Opera browser compatible with the WebKit HTML5 document layout/rendering model. I would even go as far as to say that WebKit-HTML5 owns the Web's document model and application layer for the future. Excepting for Silverlight, which features the OOXML document model with over 500 million desktop develop
Gary Edwards

Google Chrome OS: Web Platform To Rule Them All -- InformationWeek - 0 views

  •  
    Some good commentary on Chrome OS from InformationWeek's Thomas Claburn. Excerpt:

    With Chrome OS, Google aims to make the Web the primary platform for software development... The fact that Chrome OS applications will be written using open Web standards like JavaScript, HTML, and CSS might seem like a liability because Web applications still aren't as capable as applications written for specific devices and operating systems. But Google is betting that will change and is working to effect the change on which its bet depends. Within a year or two, Web browsers will gain access to peripherals, through an infrastructure layer above the level of device drivers. Google's work with standards bodies is making that happen...

    According to Matt Womer, the "ubiquitous Web activity lead" for W3C, the Web standards consortium, Web protocol groups are working to codify ways to access peripherals like digital cameras, the messaging stack, calendar data, and contact data. There's now a JavaScript API that Web developers can use to get GPS information from mobile phones using the phone's browser, he points out. What that means is that device drivers for Chrome OS will emerge as HTML 5 and related standards mature. Without these, consumers would never use Chrome OS because devices like digital cameras wouldn't be able to transfer data. Womer said the standardization work could move quite quickly, but won't be done until there's an actual implementation. That would be Chrome OS...

    Chrome OS will sell itself to developers because, as Google puts it, writing applications for the Web gives "developers the largest user base of any platform."
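    The GPS capability Womer mentions is the W3C Geolocation API; a minimal sketch of its use from a phone's browser (assuming the browser exposes the API):

        // Ask the browser for the device's current position.
        if (navigator.geolocation) {
          navigator.geolocation.getCurrentPosition(
            function (pos) {   // success: coordinates arrive on pos.coords
              alert("lat " + pos.coords.latitude +
                    ", lon " + pos.coords.longitude);
            },
            function (err) {   // failure: permission denied, no fix, etc.
              alert("geolocation failed: " + err.message);
            }
          );
        }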
Paul Merrell

Leaked: ITU's secret Internet surveillance standard discussion draft - Boing Boing - 0 views

  • Yesterday morning, I wrote about the closed-door International Telecommunication Union meeting where they were working on standardizing "deep packet inspection" -- a technology crucial to mass Internet surveillance. Other standards bodies have refused to touch DPI because of the risk to Internet users that arises from making it easier to spy on them. But not the ITU. The ITU standardization effort has been conducted in secret, without public scrutiny. Now, Asher Wolf writes,
  • I publicly asked (via Twitter) if anyone could give me access to documents relating to the ITU's DPI recommendations, now endorsed by the U.N. The ITU's senior communications officer, Toby Johnson, emailed me a copy of their unpublished policy recommendations. OOOPS! 5 hours later, they emailed, asking me not to publish it, in part or in whole, and that it was for my eyes only. Please publish it (credit me for sending it to you.) Also note: 1. The recommendations *NEVER* discuss the impact of DPI.
  • 2. A FEW EXAMPLES OF POTENTIAL DPI USE CITED BY THE ITU:
    "I.9.2 DPI engine use case: Simple fixed string matching for BitTorrent"
    "II.3.4 Example “Forwarding copy right protected audio content”"
    "II.3.6 Example “Detection of a specific transferred file from a particular user”"
    "II.4.2 Example “Security check – Block SIP messages (across entire SIP traffic) with specific content types”"
    "II.4.5 Example “Identify particular host by evaluating all RTCP SDES packets”"
    "II.4.6 Example “Measure Spanish Jabber traffic”"
    "II.4.7 Example “Blocking of dedicated games”"
    "II.4.11 Example “Identify uploading BitTorrent users”"
    "II.4.13 Example “Blocking Peer-to-Peer VoIP telephony with proprietary end-to-end application control protocols”"
    "II.5.1 Example “Detecting a specific Peer-to-Peer VoIP telephony with proprietary end-to-end application control protocols”"
Gary Edwards

Mobile Cloud Computing: $9.5 Billion by 2014 - 0 views

  •  
    I left a lengthy comment on this very good article. Excerpt:

    According to the latest study from Juniper Research, the market for cloud-based mobile applications will grow 88% annually from 2009 to 2014. The market was just over $400 million this past year, says Juniper, but by 2014 it will reach $9.5 billion. Driving this growth will be the adoption of the new web standard HTML5, increased mobile broadband coverage and the need for always-on collaborative services for the enterprise.

    Cloud Apps in your Pocket: Mobile cloud computing is a term that refers to an infrastructure where both the data storage and the data processing happen outside of the mobile device from which an application is launched. To the typical consumer, a cloud-based mobile application looks and feels just like any app purchased or downloaded from a mobile application store like iTunes. However, the app is driven from the "cloud," not from the handheld device itself. There are already a few well-known mobile cloud apps out there, including Google's Gmail and Google Voice for iPhone. When launched via iPhone homescreen shortcuts, these apps perform just like any other app on the iPhone, but all of their processing power comes from the cloud. In the future, there will be even more applications like these available, but they won't necessarily be mobilized web sites like those in Google's line-up. Cloud-based mobile apps are perfectly capable of being packaged in a way that allows them to be sold alongside traditional mobile apps in mobile application stores, with no one but the developers any wiser.

    HTML5 Paves the Way for Mobile Web's Future
Gary Edwards

Five reasons why Microsoft can't compete (and Steve Ballmer isn't one of them) - 2 views

  • discontinued
  • 1. U.S. and European antitrust cases put lawyers and non-technologists in charge of important final product decisions.
  • The company long resisted releasing pertinent interoperability information in the United States. On the European Continent, this resistance led to huge fines. Meanwhile, Microsoft steered away from exclusive contracts and from pushing into adjacent markets.
  • ...11 more annotations...
  • Additionally, Microsoft curtailed development of the so-called middleware at the core of the U.S. case: e-mail, instant messaging, media playback and Web browsing.
  • Microsoft cofounder Bill Gates learned several important lessons from IBM. Among them: the value of controlling key technology endpoints. For IBM, it was control interfaces. For Microsoft: computing standards and file formats.
  • 2. Microsoft lost control of file formats.
  • Charles Simonyi, the father of Microsoft Office, and his team achieved two important goals by the mid-1990s: Established format standards that resolved problems sharing documents created by disparate products.
  • Ensured that Microsoft file formats would become the adopted desktop productivity standards. Format lock-in helped drive Office sales throughout the late 1990s and early 2000s -- and Windows along with it. However, the Web emerged as a potent threat, which Gates warned about in his May 1995 "Internet Tidal Wave" memo. Gates specifically identified HTML, HTTP and TCP/IP as formats outside Microsoft's control. "Browsing the Web, you find almost no Microsoft file formats," Gates wrote. He observed not seeing a single Microsoft file format "after 10 hours of browsing," but plenty of Apple QuickTime videos and Adobe PDF documents. He warned that "the Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of the graphical user interface (GUI)."
  • 3. Microsoft's senior leadership is middle-aging.
  • Google resembles Microsoft in the 1980s and 1990s:
  • Microsoft's middle-management structure is too large.
  • 5. Microsoft's corporate culture is risk averse.
  • Microsoft's
  • Microsoft was nimbler during the transition from mainframe to PC dominance. IBM had built up massive corporate infrastructure, a large customer base and revenue streams attached to both. With few customers, Microsoft had little to lose but much to gain; the upstart took risks IBM wouldn't for fear of losing customers or jeopardizing existing revenue streams. Microsoft's role is similar today. Two product lines, Office and Windows, account for the majority of Microsoft products, and the majority of sales are to enterprises -- the same kind of customers IBM had during the mainframe era.
  •  
    Excellent summary and historical discussion about Microsoft and why they can't seem to compete. Lots of antitrust and monopolist stuff - including file formats and interop lock-ins (endpoints). Microsoft's problems started with the World Wide Web and continue with mobile devices connected to cloud services.
Paul Merrell

Mathematical Markup Language (MathML) Version 3.0 - 0 views

  • Mathematical Markup Language (MathML) Version 3.0, W3C Proposed Recommendation, 10 August 2010
  • This specification defines the Mathematical Markup Language, or MathML. MathML is an XML application for describing mathematical notation and capturing both its structure and content. The goal of MathML is to enable mathematics to be served, received, and processed on the World Wide Web, just as HTML has enabled this functionality for text.
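    As a small illustration of what the markup looks like, here is the equation x^2 + y^2 = z^2 in presentation MathML:

        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <mrow>
            <msup><mi>x</mi><mn>2</mn></msup>
            <mo>+</mo>
            <msup><mi>y</mi><mn>2</mn></msup>
            <mo>=</mo>
            <msup><mi>z</mi><mn>2</mn></msup>
          </mrow>
        </math>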
  •  
    MathML 3 achieves proposed recommendation status. For those unfamiliar with W3C lingo, this means that it is now a proposed standard. Concurrently, W3C published a proposed recommendation for A MathML for CSS Profile, http://www.w3.org/TR/2010/PR-mathml-for-css-20100810/
Paul Merrell

Google pounds the open standards drum during I/O keynote - 0 views

  •  
    Separately, Microsoft and Apple have announced that both companies' browsers will boycott VP8 in favor of H.264, which is encumbered by more than a thousand patents. But if VP8 becomes ubiquitous on the Web, that's a hard position to maintain.
Gary Edwards

Google's Microsoft Fight Starts With Smartphones | BNET Technology Blog | BNET - 0 views

  •  
    .... "I recently described how Google's Wave, a collaboration tool based on the new HTML 5 standard, demonstrated the potential for Web applications to unglue Microsoft's hold on customers. My post quoted Gary Edwards, the former president of the Open Document Foundation, a first-hand witness to the failed attempt by Massachusetts to dump Microsoft and as experienced a hand at Microsoft-tilting as anyone I know......"
Gary Edwards

The Lowdown: Technology and Politics of HTML5 vs. Flash | Hidden Dimensions | The M... - 0 views

  •  
    Excellent but lightweight and breezy review of the Flash-Silverlight-Open Web HTML5 battle for the future of the Web. Excerpt:

    At the top of the org chart, Apple's deprecation of Flash technology is all about politics. Apple doesn't want its mainstream video delivery system controlled by a third party. So Mr. Jobs backs up his politics with tidbits of technical truths. However, discovering the real truths about HTML5 and Flash is a bit harder, as this survey shows. On February 11th, I wrote an editorial, "What Should Apple Do About Adobe?" Part of the discussion related to Adobe's Flash Player on the Mac, updates and security. Inevitably, the comments escalated to a discussion of Steve Jobs' distaste for and blocking of Flash on the iPhone and iPad. The question is: is Apple's stance against Flash justified? Of course, any political argument needs only the barest of ideological arguments to sustain itself. More to the point: can Apple fight this war and win based on the state of the art with HTML5? Again, Apple's CEO must believe he can win this war. There has to be some technical basis for that, or the war wouldn't be waged.
Gary Edwards

How Did We Get Here? - Dive Into HTML5 with Mark Pilgrim - 1 views

  •  
    The history of HTML from its earliest days to HTML5, by Mark Pilgrim. Wonderful stuff, beautifully written. Excellent introduction to the HTML5 category of Open Web technologies (HTML5, CSS3, SVG, JavaScript and the Open Web APIs). Excerpt: Implementations and specifications have to do a delicate dance together. You don't want implementations to happen before the specification is finished, because people start depending on the details of implementations and that constrains the specification. However, you also don't want the specification to be finished before there are implementations and author experience with those implementations, because you need the feedback. There is unavoidable tension here, but we just have to muddle on through.
Paul Merrell

Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments - 0 views

  • In September of last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country. The predictable results of those meetings are now clear and well-documented. Ever since, Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders
  • Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government.
  • What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times: It’s not a law that appears to be written or designed to deal with the special situations where it’s lawful or appropriate to repress speech. … This sanctions law is being used to suppress speech with little consideration of the free expression values and the special risks of blocking speech, as opposed to blocking commerce or funds as the sanctions was designed to do. That’s really problematic.
  • ...3 more annotations...
  • As is always true of censorship, there is one, and only one, principle driving all of this: power. Facebook will submit to and obey the censorship demands of governments and officials who actually wield power over it, while ignoring those who do not. That's why declared enemies of the U.S. and Israeli governments are vulnerable to censorship measures by Facebook, whereas U.S. and Israeli officials (and their most tyrannical and repressive allies) are not.
  • All of this illustrates that the same severe dangers from state censorship are raised at least as much by the pleas for Silicon Valley giants to more actively censor “bad speech.” Calls for state censorship may often be well-intentioned — a desire to protect marginalized groups from damaging “hate speech” — yet, predictably, they are far more often used against marginalized groups: to censor them rather than protect them. One need merely look at how hate speech laws are used in Europe, or on U.S. college campuses, to see that the censorship victims are often critics of European wars, or activists against Israeli occupation, or advocates for minority rights.
  • It’s hard to believe that anyone’s ideal view of the internet entails vesting power in the U.S. government, the Israeli government, and other world powers to decide who may be heard on it and who must be suppressed. But increasingly, in the name of pleading with internet companies to protect us, that’s exactly what is happening.
Paul Merrell

YouTube To Censor "Controversial" Content, ADL On Board As Flagger - 0 views

  • Chief among the groups seeking to clamp down on independent media has been Google, the massive technology company with deep connections to the U.S. intelligence community, as well as to U.S. government and business elites.
  • Since 2015, Google has worked to become the Internet’s “Ministry of Truth,” first through its creation of the First Draft Coalition and more recently via major changes made to its search engine that curtail public access to news sites independent of the corporate media.
  • Google has now stepped up its war on free speech and the freedom of the press through its popular subsidiary, YouTube. On Tuesday, YouTube announced online that it is set to begin censoring content deemed “controversial,” even if that content does not break any laws or violate YouTube’s user agreement. Misleadingly dubbed as an effort “to fight terror content online,” the new program will flag content for review through a mix of machine algorithms and “human review,” guided by standards set up by “expert NGOs and institutions” that are part of YouTube’s “Trusted Flagger” program. YouTube stated that such organizations “bring expert knowledge of complex issues like hate speech, radicalization, and terrorism.” One of the leading institutions directing the course of the Trusted Flagger program is the Anti-Defamation League (ADL). The ADL was initially founded to “stop the defamation of the Jewish people and to secure justice and fair treatment to all” but has gained a reputation over the years for labeling any critic of Israel’s government as an “anti-Semite.” For instance, characterizing Israeli policies towards the Palestinians as “racist” or “apartheid-like” is considered “hate speech” by the ADL, as is accusing Israel of war crimes or attempted ethnic cleansing. The ADL has even described explicitly Jewish organizations who are critical of Israel’s government as being “anti-Semitic.”