Skip to main content

Home/ Future of the Web/ Group items tagged web-standards

Rss Feed Group items tagged

Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • n contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, is can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the Web Kit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with a XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with a XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle then trying to
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the Web Kit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with a XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with a XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle then trying to
Gary Edwards

The Open Web: Next-Generation Standards Support in WebKit/ Safari - 0 views

  •  
    Apple has posted an interesting page describing Safari technologies. Innovations and support for existing standards as well as the ACID3 test are covered.

    Many people think that the Apple WebKit-Safari-iPhone innovations are pushing Open Web Standards beyond beyond the limits of "Open", and deep into the verboten realm of vendor specific extensions. Others, myself included, believe that the WebKit community has to do this if Open Web technologies are to be anyway competitive with Microsoft's RiA (XAML-Silverlight-WPF).

    Adobe RiA (AiR-Flex-Flash) is also an alternative to WebKit and Microsoft RiA; kind of half Open Web, half proprietary though. Adobe Flash is of course proprietary. While Adobe AiR implements the WebKit layout engine and visual document model. I suspect that as Adobe RiA loses ground to Microsoft Silverlight, they will open up Flash. But that's not something the Open Web can afford to wait for.

    In many ways, WebKit is at the cutting edge of Ajax Open Web technologies. The problems of Ajax not scaling well are being solved as shared JavaScript libraries continue to amaze, and the JavaScript engines roar with horsepower. Innovations in WebKit, even the vendor-device specific ones, are being picked up by the JS Libraries, Firefox, and the other Open Web browsers.

    At the end of the day though, it is the balance between the ACiD3 test on one side and the incredible market surge of WebKit smartphones, countertops, and netbook devices at the edge of the Web that seem to hold things together.

    The surge at the edge is washing back over the greater Web, as cross-browser frustrated Web designers and developers roll out the iPhone welcome. Let's hope the ACiD3 test holds. So far it's proving to be a far more important consideration for maintaining Open Web interop, without sacrificing innovation, than anything going on at the stalled W3C.

    "..... Safari continues to lead the way, implementing
Gary Edwards

Brendan's Roadmap Updates: Open letter to Microsoft's Chris Wilson and their fight to s... - 0 views

  • The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
  • In my opinion the notion that we need to add features so that ajax programming would be easier is plain wrong. ajax is a hack and also the notion of a webapp is a hack. the web was created in a document centric view. All w3c standards are also based on the same document notion. The heart of the web, the HTTP protocol is designed to support a web of documents and as such is stateless. the proper solution, IMO, is not to evolve ES for the benefit of ajax and webapps, but rather generalize the notion of a document browser that connects to a web of documents to a general purpose client engine that connects to a network of internet applications. thus the current web (document) browser just becomes one such internet application.
  •  
    the obvious conflict of interest between the standards-based web and proprietary platforms advanced by Microsoft, and the rationales for keeping the web's client-side programming language small while the proprietary platforms rapidly evolve support for large languages, does not help maintain the fiction that only clashing high-level philosophies are involved here. Readers may not know that Ecma has no provision for "minor releases" of its standards, so any ES3.1 that was approved by TG1 would inevitably be given a whole edition number, presumably becoming the 4th Edition of ECMAScript. This is obviously contentious given all the years that the majority of TG1, sometimes even apparently including Microsoft representatives, has worked on ES4, and the developer expectations set by this long-standing effort. A history of Microsoft's post-ES3 involvement in the ECMAScript standard group, leading up to the overt split in TG1 in March, is summarized here. The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
Gary Edwards

ES4 and the fight for the future of the Open Web - By Haavard - 0 views

  • Here, we have no better theory to explain why Microsoft is enthusiastic to spread C# onto the web via Silverlight, but not to give C# a run for its money in the open web standards by supporting ES4 in IE.The fact is, and we've heard this over late night truth-telling meetings between Mozilla principals and friends at Microsoft, that Microsoft does not think the web needs to change much. Or as one insider said to a Mozilla figure earlier this year: "we could improve the web standards, but what's in it for us?"
  •  
    Microsoft opposes the stunning collection of EcmaScript standards improvements to JavaScript ES3 known as "ES4". Brendan Eich, author of JavaScript and lead Mozilla developer claims that Microsoft is stalling the advance of JavaScript to protect their proprietary advantages with Silverlight - WPF technologies. Opera developer "Haavard" asks the question, "Why would Microsoft do this?" Brendan Eich explains: Indeed Microsoft does not desire serious change to ES3, and we heard this inside TG1 in April. The words were (from my notes) more like this: "Microsoft does not think the web needs to change much". Except, of course, via Silverlight and WPF, which if not matched by evolution of the open web standards, will spread far and wide on the Web, as Flash already has. And that change to the Web is apparently just fine and dandy according to Microsoft. First, Microsoft does not think the Web needs to change much, but then they give us Silverlight and WPF? An amazing contradiction if I ever saw one. It is obvious that Microsoft wants to lock the Web to their proprietary technologies again. They want Silverlight, not some new open standard which further threatens their locked-in position. They will use dirty tricks - lies and deception - to convince people that they are in the right. Excellent discussion on how Microsoft participates in open standards groups to delay, stall and dumb down the Open Web formats, protocols and interfaces their competitors use. With their applications and services, Microsoft offers users a Hobbsian choice; use the stalled, limited and dumbed down Open Web standards, or, use rich, fully featured and advanced but proprietary Silverlight-WPF technologies. Some choice.
Gary Edwards

ptsefton » OpenOffice.org is bad for the planet - 0 views

  •  
    ptsefton continues his rant that OpenOffice does not support the Open Web. He's been on this rant for so long, i'm wondering if he really thinks there's a chance the lords of ODF and the OpenOffice source code are listening? In this post he describes how useless it is to submit his findings and frustrations with OOo in a bug report. Pretty funny stuff even if you do end up joining the Michael Meeks trek along this trail of tears. Maybe there's another way?

    What would happen if pt moved from targeting the not so open OpenOffice, to target governments and enterprises trying to set future information system requirements?

    NY State is next up on this endless list. Most likely they will follow the lessons of exhaustive pilot studies conducted by Massachusetts, California, Belgium, Denmark and England, and end up mandating the use of both open standard "XML" formats, ODF and OOXML.

    The pilots concluded that there was a need for both XML formats; depending on the needs of different departments and workgroups. The pilot studies scream out a general rule of thumb; if your department has day-to-day business processes bound to MSOffice workgroups, then it makes sense to use MSOffice OOXML going forward. If there is no legacy MSOffice bound workgroup or workflow, it makes sense to move to OpenOffice ODF.

    One thing the pilots make clear is that it is prohibitively costly and disruptive to try to replace MSOffice bound workgroups.

    What NY State might consider is that the Web is going to be an important part of their informations systems future. What a surprise. Every pilot recognized and indeed, emphasized this fact. Yet, they fell short of the obvious conclusion; mandating that desktop applications provide native support for Open Web formats, protocols and interfaces!

    What's wrong with insisting that desktop applciations and office suites support the rapidly advancing HTML+ technologies as well as the applicat
Gonzalo San Gil, PhD.

The open web's guardians are acting like it's already dead / Boing Boing - 0 views

  •  
    "The World Wide Web Consortium -- an influential standards body devoted to the open web -- used to make standards that would let anyone make a browser that could view the whole Web; now they're making standards that let the giant browser companies and giant entertainment companies decide which browsers will and won't work on the Web of the future. "
  •  
    "The World Wide Web Consortium -- an influential standards body devoted to the open web -- used to make standards that would let anyone make a browser that could view the whole Web; now they're making standards that let the giant browser companies and giant entertainment companies decide which browsers will and won't work on the Web of the future. "
Gary Edwards

Developer: Dump JavaScript for faster Web loading | CIO - 0 views

  • Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states.
  • The browser "then replaces DOM elements with whatever data that was loaded as needed.
  • The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript."
  •  
    "A W3C (World Wide Web Consortium) mailing list post entitled "HTML6 proposal for single-page Web apps without JavaScript" details the proposal, dated March 20. "The overall purpose [of the plan] is to reduce response times when loading Web pages," said Web developer Bobby Mozumder, editor in chief of FutureClaw magazine, in an email. "This is the difference between a 300ms page load vs 10ms. The faster you are, the better people are going to feel about using your Website." The proposal cites a standard design pattern emerging via front-end JavaScript frameworks where content is loaded dynamically via JSON APIs. "This is the single-page app Web design pattern," said Mozumder. "Everyone's into it because the responsiveness is so much better than loading a full page -- 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. Since this is so common now, can we implement this directly in the browsers via HTML so users can dynamically run single-page apps without JavaScript?" Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states. The browser "then replaces DOM elements with whatever data that was loaded as needed." The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript." JavaScript frameworks and JavaScript are leveraged for loading now, but there are issues with these, Mozumder explained. "Should we force millions of Web developers to learn JavaScript, a framework, and an associated templating language if they want a speedy, responsive Web site out-of-the-box? This is a huge barrier for beginners, and right n
Paul Merrell

Last Call Working Draft -- W3C Authoring Tool Accessibility Guidelines (ATAG) 2.0 - 1 views

  • Examples of authoring tools: ATAG 2.0 applies to a wide variety of web content generating applications, including, but not limited to: web page authoring tools (e.g., WYSIWYG HTML editors) software for directly editing source code (see note below) software for converting to web content technologies (e.g., "Save as HTML" features in office suites) integrated development environments (e.g., for web application development) software that generates web content on the basis of templates, scripts, command-line input or "wizard"-type processes software for rapidly updating portions of web pages (e.g., blogging, wikis, online forums) software for generating/managing entire web sites (e.g., content management systems, courseware tools, content aggregators) email clients that send messages in web content technologies multimedia authoring tools debugging tools for web content software for creating mobile web applications
  • Web-based and non-web-based: ATAG 2.0 applies equally to authoring tools of web content that are web-based, non-web-based or a combination (e.g., a non-web-based markup editor with a web-based help system, a web-based content management system with a non-web-based file uploader client). Real-time publishing: ATAG 2.0 applies to authoring tools with workflows that involve real-time publishing of web content (e.g., some collaborative tools). For these authoring tools, conformance to Part B of ATAG 2.0 may involve some combination of real-time accessibility supports and additional accessibility supports available after the real-time authoring session (e.g., the ability to add captions for audio that was initially published in real-time). For more information, see the Implementing ATAG 2.0 - Appendix E: Real-time content production. Text Editors: ATAG 2.0 is not intended to apply to simple text editors that can be used to edit source content, but that include no support for the production of any particular web content technology. In contrast, ATAG 2.0 can apply to more sophisticated source content editors that support the production of specific web content technologies (e.g., with syntax checking, markup prediction, etc.).
  •  
    Link is the latest version link so page should update when this specification graduates to a W3C recommendation.
Gary Edwards

Microsoft's Quest for Interoperability and Open Standards - 0 views

  •  
    Interesting article discussing the many ways Microsoft is using to improve the public perception that they are serious about interoperability and open formats, protocols and interfaces. Rocketman attended the recent ISO SC34 meeting in Prague and agrees that Microsoft has indeed put on a new public face filled with cooperation, compliance and unheard of sincerity.

    He also says, "Don't be fooled!!!"

    There is a big difference between participation in vendor consortia and government sponsored public standards efforts, and, actual implementation at the product level. Looking at how Microsoft products implement open standards, my take is that they have decided on a policy of end user choice. Their applications offer on the one hand the choice of aging, near irrelevant and often crippled open standards. And on the other, the option of very rich and feature filled but proprietary formats, protocols and interfaces that integrate across the entire Microsoft platform of desktop, devices and servers. For instance; IE8 supports 1998 HTML-CSS, but not the advanced ACiD-3 "HTML+" used by WebKit, Firefox, Opera and near every device or smartphone operating at the edge of the Web. (HTML+ = HTML5, CSS4, SVG/Canvas, JS, JS Libs).

    But they do offer advanced .NET-WPF proprietary alternative to Open Web HTML+. These include XAML, Silverlight, XPS, LINQ, Smart Tags, and OOXML. Very nice.

    "When an open source advocate, open standards advocate, or, well, pretty much anyone that competes with Microsoft (news, site) sees an extended hand from the software giant toward better interoperability, they tend to look and see if the other hand's holding a spiked club.

    Even so, the Redmond, WA company continues to push the message that it has seen the light regarding open standards and interoperability...."

Paul Merrell

Publicly Available Standards - 0 views

    • Paul Merrell
       
      This is the download page for ISO/IEC information technology standards available at no charge. The same standards are available on other ISO, IEC, and other standards organizations' web pages for a fee. If you need an ISO/IEC information technology standard, check here before you pay money for what's also given away for free. Notice that standards are arranged on the page in numerical order.
  •  
    Most ISO and IEC standards are only available for purchase. However, a few are publicly available at no charge. ISO/IEC:26300-2006 is one of the latter and can be downloaded from this page in XHTML format. Note that the standards listed on the page are arranged numerically and the OpenDocument standard is very near the bottom of the page. This version of ODF is the only version that has the legal status of an international standard, making it eligible as a government procurement specification throughout all Member nations of the Agreement on Government Procurement.
  •  
    Like this http://www.hdfilmsaati.net Film,dvd,download,free download,product... ppc,adword,adsense,amazon,clickbank,osell,bookmark,dofollow,edu,gov,ads,linkwell,traffic,scor,serp,goggle,bing,yahoo.ads,ads network,ads goggle,bing,quality links,link best,ptr,cpa,bpa
Gary Edwards

Desktop Web Applications using Sproutcore | rapid apps group - low cost, ethical web de... - 0 views

  •  
    Good article discussing the rapid advance of a WebOS for Web Applications based on the WebKit JavaScript model. Author focuses on Apple's SproutCore - Object C framework, but provides a very broad scope of discussion. Interesting stuff concerning the relationship between JavaScript, the SproutCore Framework, and Ruby. I found the link to this at the ReadWriteWeb story, "The Future of the Desktop" ........ "Desktop web applications offer the convenience of desktop applications and the interconnected power of web applications. This article looks at what they are, how they may evolve and focuses on Sproutcore, an open source framework for building them: The Internet is still evolving and the familiar struggle over who will control the platform of future web applications is still ongoing. Companies like Microsoft and Adobe provide platforms that build slick web applications but their aim is to dominate with proprietary systems that will effectively replace the browser. On the other side you have Google and Apple who have developed or support open web standards for developing web applications. If the proprietary companies win, future web applications could be locked into their systems and the incredible innovation that has driven the web to date may begin to falter.
Gary Edwards

Official Google Webmaster Central Blog: Introducing Rich Snippets - 0 views

  •  
    Google "Rich Snippets" is a new presentation of HTML snippets that applies Google's algorithms to highlight structured data embedded in web pages. Rich Snippets give end-users convenient summary information about their search results at a glance. Google is currently supporting a very limited subset of data about reviews and people. When searching for a product or service, users can easily see reviews and ratings, and when searching for a person, they'll get help distinguishing between people with the same name. It's a simple change to the display of search results, yet our experiments have shown that users find the new data valuable. For this to work though, both Web-masters and Web-workers have to annotate thier pages with structured data in a standard format. Google snippets supports microformats and RDFa. Existing Web data can be wrapped with some additional tags to accomplish this. Notice that Google avoids mention of RDF and the W3C's vision of a "Semantic Web" where Web objects are fully described in machine readable semantics. Over at the WHATWG group, where work on HTML5 continues, Google's Ian Hickson has been fighting RDFa and the Semantic Web in what looks to be an effort to protect the infamous Google algorithms. RDFa provides a means for Web-workers, knowledge-workers, line-of-business managers and document generating end-users to enrich their HTML+ with machine semantics. The idea being that the document experts creating Web content can best describe to search engine and content management machines the objects-of-information used. The google algorithms provide a proprietary semantics of this same content. The best solution to the tsunami of conten the Web has wrought would be to combine end-user semantic expertise with Google algorithms. Let's hope Google stays the RDFa course and comes around to recognize the full potential of organizing the world's information with the input of content providers. One thing the world desperatel
Gary Edwards

Good News for Ajax and the Open Web - The Browser Wars Are Back - 0 views

  • For much of this decade, Web browsing has been dominated by Microsoft's Internet Explorer (IE), which at its height achieved market share numbers approaching 95%, with the result that Microsoft owned a de facto standard for the Web and held effective veto power over the future of HTML. During much of this period, Microsoft suspended development of IE, with the result that virtually no new features appeared within the world's dominant browser from 2001 to 2006. But while IE was sleeping, one of the biggest phenomena of the computer age happened: Ajax. Clever Web developers discovered gold in them there mountains. Using Ajax techniques, Web developers could create desktop-like rich user interfaces right in the browser. Not only that, Ajax was evolutionary. Ajax offered an incremental path from the industry's existing HTML-based infrastructure and know-how, allowing Web developers to add rich Ajax elements to an existing HTML page.
  • A companion community effort helping to accelerate the adoption of open standards is the Web Standards Project (http://www.webstandards.org), which is producing a set of "acid tests" that verify browser support for Open Web technologies, such as HTML, CSS and JavaScript. Acid2 is focused mainly on CSS support, and is now supported by Opera, Safari/WebKit, and IE. Acid3 (http://www.webstandards.org/action/acid3) tests DOM scripting, CSS rendering,
    • Gary Edwards
       
      The amazing thing about Ajax and the Open Web is the way WHATWG, WebKit, and the Web Standards "ACID" work has accelerated Open Web Standards, pushing far beyond the work of the glacial W3C.
  • Runtime Advocacy Task Force
  •  
    Lengthy artilce from the OpenAjax Alliance summarizing HTML, Ajax and the future of the Open Web. Very well referenced. Lots of whitepapers and links
  •  
    good summarization of the Open Web future.
  •  
    Most quality online stores. Know whether you are a trusted online retailer in the world. Whatever we can buy very good quality. and do not hesitate. Everything is very high quality. Including clothes, accessories, bags, cups. Highly recommended. This is one of the trusted online store in the world. View now www.retrostyler.com
Gary Edwards

Ajaxian » In Praise of Evolvable Systems - 0 views

  •  
    "Why something as poorly designed as the Web became The Next Big Thing, and what that means for the future." Well designed but "fixed" systems were over taken by the evolvable but poorly designed Web. I'm wonderig if these same "evolving" principles apply to standard organizations? Put WebKit up against the standard orgs in charge of key WebKit components, and you see clearly that WebKit would fail misably if they stuck to the hapless efforts of the W3C, Ecma and ISO. Besides the fact that entrenched players such as Microsoft are sitting on those standards orgs in position to dumb down or put into terminal stall much needed innovations. For instance, WebKit deepneds on HTML5, CSS3, SVG, and JavaScript. All of which are stalled at various standards orgs. As a reaction to this org stall, the WebKit group pushes forward anyway relying instead on OSS Community style innovation and consensus model sharing.
Gary Edwards

Meteor: The NeXT Web - 0 views

  •  
    "Writing software is too hard and it takes too long. It's time for a new way to write software - especially application software, the user-facing software we use every day to talk to people and keep track of things. This new way should be radically simple. It should make it possible to build a prototype in a day or two, and a real production app in a few weeks. It should make everyday things easy, even when those everyday things involve hundreds of servers, millions of users, and integration with dozens of other systems. It should be built on collaboration, specialization, and division of labor, and it should be accessible to the maximum number of people. Today, there's a chance to create this new way - to build a new platform for cloud applications that will become as ubiquitous as previous platforms such as Unix, HTTP, and the relational database. It is not a small project. There are many big problems to tackle, such as: How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data? How do we design software to run in a radically distributed environment, where even everyday database apps are spread over multiple data centers and hundreds of intelligent client devices, and must integrate with other software at dozens of other organizations? How do we prepare for a world where most web APIs will be push-based (realtime), rather than polling-driven? In the face of escalating complexity, how can we simplify software engineering so that more people can do it? How will software developers collaborate and share components in this new world? Meteor is our audacious attempt to solve all of these big problems, at least for a certain large class of everyday applications. We think that success will come from hard work, respect for history and "classically beautiful" engineering patterns, and a philosophy of generally open and collaborative development. " .............. "It is not a
  •  
    "How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data?" From a litigation aspect, the best bet I know of is antitrust litigation against the W3C and the WHATWG Working Group for implementing a non-interoperable specification. See e.g., Commission v. Microsoft, No. T-167/08, European Community Court of First Instance (Grand Chamber Judgment of 17 September, 2007), para. 230, 374, 421, http://preview.tinyurl.com/chsdb4w (rejecting Microsoft's argument that "interoperability" has a 1-way rather than 2-way meaning; information technology specifications must be disclosed with sufficient specificity to place competitors on an "equal footing" in regard to interoperability; "the 12th recital to Directive 91/250 defines interoperability as 'the ability to exchange information and mutually to use the information which has been exchanged'"). Note that the Microsoft case was prosecuted on the E.U.'s "abuse of market power" law that corresponds to the U.S. Sherman Act § 2 (monopolies). But undoubtedly the E.U. courts would apply the same standard to "agreements among undertakings" in restraint of trade, counterpart to the Sherman Act's § 1 (conspiracies in restraint of trade), the branch that applies to development of voluntary standards by competitors. But better to innovate and obsolete HTML, I think. DG Competition and the DoJ won't prosecute such cases soon. For example, Obama ran for office promising to "reinvigorate antitrust enforcement" but his DoJ has yet to file its first antitrust case against a big company. Nb., virtually the same definition of interoperability announced by the Court of First Instance is provided by ISO/IEC JTC-1 Directives, annex I ("eye"), which is applicable to all international standards in the IT sector: "... interoperability is understood to be the ability of two or more IT systems to exchange information at one or more standardised interfaces
Gary Edwards

Mozilla Standards Blog » Blog Archive » Fear and Loathing on the Standards Tr... - 0 views

  •  
    everything we do here at Mozilla is, for the most part, a contribution to the Web platform. I blogged previously about the low esteem I reserve for arguments that favor proprietary platforms (which typically pit rapid proprietary innovation against dawdling Web Platform standardization cycles), but even in that upbeat blog post, I acknowledge that the standards process leaves much room for improvement.
Paul Merrell

Microsoft breaks IE8 interoperability promise | The Register - 0 views

  • In March, Microsoft announced that their upcoming Internet Explorer 8 would: "use its most standards compliant mode, IE8 Standards, as the default." Note the last word: default. Microsoft argued that, in light of their newly published interoperability principles, it was the right thing to do. This declaration heralded an about-face and was widely praised by the web standards community; people were stunned and delighted by Microsoft's promise. This week, the promise was broken. It lasted less than six months. Now that Internet Explorer IE8 beta 2 is released, we know that many, if not most, pages viewed in IE8 will not be shown in standards mode by default.
  • How many pages are affected by this change? Here's the back of my envelope: The PC market can be split into two segments — the enterprise market and the home market. The enterprise market accounts for around 60 per cent of all PCs sold, while the home market accounts for the remaining 40 per cent. Within enterprises, intranets are used for all sorts of things and account for, perhaps, 80 per cent of all page views. Thus, intranets account for about half of all page views on PCs!
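    Spelling out that envelope arithmetic: if enterprises account for 60 per cent of PCs, and intranet pages make up 80 per cent of the page views on those machines, then intranets represent roughly 0.60 × 0.80 = 0.48, or just under half, of all PC page views.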
  •  
    Article by Hakon Lie of Opera Software. Also note that, according to the European Commission:

    "As for the tying of separate software products, in its Microsoft judgment of 17 September 2007, the Court of First Instance confirmed the principles that must be respected by dominant companies. In a complaint by Opera, a competing browser vendor, Microsoft is alleged to have engaged in illegal tying of its Internet Explorer product to its dominant Windows operating system. The complaint alleges that there is ongoing competitive harm from Microsoft's practices, in particular in view of new proprietary technologies that Microsoft has allegedly introduced in its browser that would reduce compatibility with open internet standards, and therefore hinder competition. In addition, allegations of tying of other separate software products by Microsoft, including desktop search and Windows Live, have been brought to the Commission's attention. The Commission's investigation will therefore focus on allegations that a range of products have been unlawfully tied to sales of Microsoft's dominant operating system." http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/08/19&format=HTML&aged=0&language=EN&guiLanguage=en
Paul Merrell

Digital Web Magazine - HTML5, XHTML2, and the Future of the Web - 0 views

  •  
    The browser-centric view of why HTML5 is better than XHTML2. Notice that the entire discussion never addresses the need for interoperable data exchange between different web applications, let alone their interaction with more traditional desktop or mobile-device editors. HTML5 is enormously under-specified for data exchange among anything but web browsers. As just one small example, neither HTML5 nor CSS Selectors specifies a standard element for footnotes and footnote calls, let alone attributes for their numbering style, formatting, and location. And even if CSS Selectors included such elements and attributes, CSS lives in web-site page templates, not in the web-app editors for site content that use HTML forms. Easy pickings for Microsoft and its proprietary stack, which does interoperably integrate the desktop, servers, devices, and the Web.
Gary Edwards

Apple's extensions: Good or bad for the open web? | Fyrdility - 0 views

  •  
    Fyrdility asks the question: when it comes to the future of the Open Web, is Apple worse than Microsoft? He laments the fact that Apple pushes forward with innovations that have yet to be discussed by the greater Web community. Yes, Apple faithfully submits these extensions and innovations back to the W3C as open standards proposals, but there is no waiting around for discussion or judgment. Apple is on a mission.

    IMHO, what Apple and the WebKit community do is not that much different from the way GPL-based open source communities work, except that Apple works without the GPL guarantee. The WebKit innovations and extensions are similar to GPL forks of shared source code: done in the open, contributed back to the community, with the community responsible for interoperability going forward.

    There are good forks and there are not-so-good forks. But it's not always a technology-and-engineering discussion that drives interop; sometimes it's market share and user uptake that carry the day. And indeed, this is very much the case with Apple and the WebKit community. The edge of the Web belongs to WebKit and the iPhone, and these "forks" of the Open Web source code are going to weigh heavily on concerns for interop with the greater Web.

    One thing Fyrdility fails to recognize is the importance of the Acid3 test to future interop. Discussion is important, but nothing beats the leveling effect of broadly measuring innovation for interop - and doing so without crippling innovation.

    "......Apple is heavily involved in the W3C and WHATWG, where they help define specifications. They are also well-known for implementing many unofficial CSS extensions, which are subsequently submitted for standardization. However, Apple is also known for preventing its representatives from participating in panels such as the annual Browser Wars panels at SXSW, which expresses a much less cooperative position...."
Gary Edwards

Wary of Upsetting Mighty Microsoft, Acer Limits Use Android for Phones, Not Netbooks. - 0 views

  •  
    "For a netbook, you really need to be able to view a full Web for the total Internet experience, and Android is not that yet," Jim Wong, head of Acer's IT products, said Tuesday while introducing a new line of computers."

    Right. Android runs the WebKit/Chromium browser, built on the same WebKit code base used by Apple's iPhone/Safari, Google Chrome, the Palm Pre, Nokia S60, the Qt IDE, the 280 Atlas WebKit IDE, the SproutCore-Cocoa project, KOffice, Sun's JavaFX, Adobe AIR, Eclipse "Blinki," Eclipse SWT, Linux Midori, and the Windows CE IRiS browser - to name but a few. The other Open Web browsers, Opera and Mozilla Firefox, have embraced the highly interactive and very visual WebKit document and application model. Add to this WebKit tsunami the many web sites, applications, and services that adopted the WebKit document model to become iPhone-ready.

    Finally, there is this: any browser, application, or web server seeking to pass the Acid3 test is, in effect, making an effort to become fully WebKit-compliant.

    Maybe Mr. Wong is talking about the 1998 Internet experience supported by IE8? Or maybe there is a secret OEM agreement lurking in the background here - the kind Microsoft used to stop Netscape and Java way back when.

    The problem for Microsoft is that, when it comes to smartphones, countertops, and netbooks at the edge of the Web, it is not competing against individual companies pushing device- and/or platform-specific services. This time it is competing against the next-generation Open Web: a very visual and interactive Open Web defined by the surge that the WebKit, Firefox, and the many JavaScript communities are leading.

    ge
  •  
    The InformationWeek page bookmarked says: "NON-WORKING URL! The URL (Web address) that has been entered is directing to a non-existent page." Try this instead: http://www.informationweek.com/news/hardware/handheld/showArticle.jhtml?articleID=216403510 - "Acer To Use Android For Phones, Not Netbooks," April 8, 2009.
  •  
    Microsoft conspiracies have happened in the past, and we should watch for them. However, another explanation is that Android does not (yet) support many browser plugins. No doubt that is what the Microsoft drones remind Acer of each time they meet, along with a pitch for Silverlight 2! For me, Silverlight 2 is so rare that I would not, personally, make it a requirement for a "full web." A non-Android Linux distribution on a netbook that ran Adobe Flash, Acrobat Reader, OpenOffice.org, and AIR when necessary would suit me fine. One day Android may do all these things too, but for now Google has bigger fish to fry!