
Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.

    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]

    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.

    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.

    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
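    To make that anatomy concrete, here is the one fixed piece of ePub plumbing: the META-INF/container.xml file that a reading system opens first. It does nothing but point at the package file declaring the book's metadata and content (the OEBPS/content.opf path is a convention, not a requirement); a plain-text mimetype file containing application/epub+zip sits alongside it.

```xml
<?xml version="1.0"?>
<!-- META-INF/container.xml: the fixed entry point inside the zip archive -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```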
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document, one that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
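    As an illustration of the sort of cleanup involved (the export class name below is invented; InDesign's actual output varies), the fix turns styled paragraphs back into structural lists:

```xml
<!-- As exported: list items disguised as styled paragraphs -->
<p class="bullet-item">prefer free software over proprietary systems</p>
<p class="bullet-item">prefer simple tools over fully automated systems</p>

<!-- After cleanup: structural XHTML that any software can interpret -->
<ul>
  <li>prefer free software over proprietary systems</li>
  <li>prefer simple tools over fully automated systems</li>
</ul>
```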
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
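    The script itself is not reproduced here, but a fragment in the same spirit shows the shape of the mapping (the ICML element names follow Adobe's IDML specification; the "Body" style name and the match rules are placeholders, not the authors' actual code):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:h="http://www.w3.org/1999/xhtml">

  <!-- Map each XHTML paragraph to an InCopy paragraph-style range -->
  <xsl:template match="h:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="normalize-space(.)"/></Content>
      </CharacterStyleRange>
      <Br/>
    </ParagraphStyleRange>
  </xsl:template>

  <!-- Ignore stray text outside matched elements -->
  <xsl:template match="text()"/>
</xsl:stylesheet>
```

    Wrapped in ICML's standard document preamble and run through xsltproc (or any XSLT 1.0 processor), output of this shape is what gets "placed" into the InDesign template.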
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
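    For a sense of what the "wrapper" amounts to, here is a skeletal OPF package file of the kind a tool like eCub generates; the title is taken from the project described here, while the identifier and file names are placeholders.

```xml
<?xml version="1.0"?>
<!-- content.opf: metadata, a manifest of every file in the book,
     and the spine giving the reading order -->
<package version="2.0" unique-identifier="bookid"
         xmlns="http://www.idpf.org/2007/opf">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:0000-placeholder</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
    <item id="css" href="book.css" media-type="text/css"/>
    <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
  </manifest>
  <spine toc="ncx">
    <itemref idref="ch1"/>
  </spine>
</package>
```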
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
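    A minimal illustration of that extensibility (the class vocabulary here is invented; the point is that a publisher is free to define its own): the document stays plain, valid XHTML, while the classes carry the semantic layer that CSS, XSLT, or microformat parsers can key on.

```xml
<!-- Plain XHTML with a publisher-defined semantic layer in class attributes -->
<div class="chapter">
  <p class="epigraph">An invented example of an epigraph.</p>
  <p class="summary">A one-paragraph chapter summary.</p>
  <p>Ordinary body text remains an ordinary paragraph.</p>
</div>
```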
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is the browser-native flavor of XML, compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
  •  
    As an afterthought, I was thinking that an alternative title to this article might have been, "Working with the Web as the Center of Everything".
Gary Edwards

Slamming the door shut on MS OOXML

  • So your goal is a networked world where metadata is routinely trashed by apps developed by those who are too dumb or otherwise disabled to preserve metadata and only the big boys get to do interoperability, right? So if I send you a document for your editing, I can't count on getting it back with xml:id attributes intact. No thanks, Patrick. That sounds way too much like how things have worked ever since office productivity software first came on the market. In your world, interoperability belongs only to those who can map features 1:1 with the most featureful apps. And that is precisely why OpenDocument never should have been approved as a standard. Your kind of interoperability makes ODF a de facto Sun Microsystems standard wearing the clothing of a de jure standard. Why not just standardize the whole world on Microsoft apps and be done with it? Are two monopolies maintained by an interoperability barrier between them better than one? Fortunately, we don't have to debate the issue because the Directives resolve the issue. You lose under the rules of the game.
  •  
    Marbux on metadata and the language of universal interoperability: Few people are aware of the raging debate that has pushed ODF to the edge. The OASIS ODF TC is split between those who support Universal Interoperability, and those who insist on continuing with limited ODF interoperability.

    ODF (OpenDocument), formerly known as Open Office XML, began its standards life in the fall of 2002, when Sun submitted the OpenOffice file format to OASIS for consideration as an office suite XML file format standard. The work on ODF did not start off as a clean slate, in that there were nearly 600 pages of application-specific specification from day one of the standards work. The forces of universal interop have sought for years to separate ODF from the application-specific features and implementation model of OpenOffice that began with those early specification volumes, and continues through the undue influence Sun continues to have over the ODF specification work.

    Many mistakenly believed that submission of ODF to ISO and subsequent approval as an international standard would provide an effective separation, putting ODF on the track of a truly universal file format.

    Marbux is one of those Universal Interop soldiers who has dug in his heels, cried to the heavens that enough is enough, and demanded the necessary changes to ODF interoperability language.

    This post he recently submitted to the OASIS ODF Metadata SC is a devastating rebuttal to the arguments of those who support the status quo of limited interoperability.

    In prior posts, marbux argues that ISO directives demand without compromise universal interoperability. This demand is also shared by the World Trade Organization directives regarding international trade laws and agreements. Here he brings those arguments together with the technical issues for achieving universal interop.

    It's a devastating argument.

Gary Edwards

Microsoft playing three card monte with XML conversion, with Sun as the "outside man" w...

  • In a highly informative post to his Open Stack blog Wednesday, Edwards explains how three key features are necessary for organizations to convert to open formats. These are:
    - Conversion Fidelity - the billions of binaries problem
    - Round Trip Fidelity - the MSOffice bound business processes, line of business integrated apps, and assistive technology type add-ons
    - Application Interop - the cross platform, inter application, cross information domain problem
  •  
    Dana Blankenhorn posted this article back in March of 2007. It was right at the time when the OASIS ODF TC and the Metadata XML/RDF SC (Sub Committee) were going at it hammer and tongs over three very important file format characteristics needed to fulfill a real world interoperability expectation:

    - Compatibility - file format level interop: backwards compatibility / compatibility with existing file formats, including the legacy of billions of binary Microsoft documents

    - Interoperability - application level interop: application interoperability, including interop with all Microsoft applications

Gary Edwards

IBM's Potempkin Village | Florian Reuter's Weblog

  • I think that contradicts the SISSL :-)
  •  
    Recently IBM held an ODF Interoperability Workshop at the OpenOffice annual conference in Barcelona, Spain. The Workshop was organized by IBM's Rob Weir. In this blog, uber document processing expert Florian Reuter opens the lid for a peek at what really happened at the Workshop. And it wasn't "interoperability".

    As a Novell employee, Florian is unable to comment publicly as to what really happened in Barcelona. But to those who are not under IBM's oppressive thumb, the results of this fiasco are laughable. Sure IBM and Rob Weir are busy threatening individuals, and bribing the press to suppress the reality of this horrific ODF ZERO Interop demonstration. But that doesn't mean those who really care can't talk about it.

    The OpenDocument Foundation has of course been screaming about the ODF interop problems. But we've been focused on the big picture of world wide market requirements; the need for ODF to be compatible with existing file formats and interoperable with existing applications - including Microsoft documents and applications.

    Of course, this level of interoperability was way outside the scope of ODF's purpose and work. We apologize for daring to suggest that real world implementation issues are important and ought to be considered. But there remains the issue of ODF interoperability, which also sucks beyond belief.

    The exact same principles apply. ODF interop depends on complete application independence, and ODF remains bound to OpenOffice.

    Now I'm someone who has publicly championed ODF interoperability. I've spent years championing the fact that ODF can meet all market requirements for interoperability. And whatever credibility I thought I might have is now destroyed by that very public and very in-your-face lack of interoperability.

    So here I am, with any credibility I might have ever had resting on the pretensions of a self-proclaimed clown (http://wordnet.princeton.edu/perl/webwn?s=antic - his description, not mine).
Gary Edwards

LOL :: Microsoft's Jean Paoli on the XML document debate

  • What’s distinctive about the goals of OOXML? Primarily, to have full fidelity with pre-existing binary documents created in Microsoft Office. “What people want is to make sure that their billions of important documents can be saved in a format where they don’t lose any information. As a design goal, we said that those formats have to represent all the information that enables high-fidelity migration from the binary formats”, says Paoli. He mentions work with institutions including the British Library and the US Library of Congress, concerned to preserve the information in their electronic archive. I asked Paoli if such users could get equally good fidelity by converting their documents to ODF. “Absolutely not,” he says. “I am very clear on that. Those two formats are done for different reasons.” What can go wrong? Paoli gives as an example the myriad ways borders can be drawn round tables in Microsoft Office and all its legacy versions. “There are 100 ways to draw the lines around a table,” he says. “The Open XML format has them all, but ODF which has not been designed for backward compatibility, does not have them. It’s really the tip of the iceberg. So if someone translates a binary document with a table to ODF, you will lose the framing details. That is just a very small example.”
  • “Open Document Format and Office Open XML have very different goals”, says Paoli, responding to the claim that the world needs only one standard XML format for office documents. “Both of them are formats for documents … both are good.”
    • Gary Edwards
       
       The door should have been slammed shut on OOXML near five years ago when, on December 14th, 2002, at the very first OASIS ODF TC meeting, Stellent's Phil Boutros proposed that the charter include "compatibility with existing file formats and interoperability with existing applications" as a priority objective.
  • I put it to Paoli that OOXML is hard to implement because of all its legacy support, some of which is currently not well documented. “I don’t believe that at all. It’s actually the opposite,” he says. He makes the point that third parties like Corel, which have previously implemented support for binary formats like .doc and .xls, should find it easy to transition to OOXML. “We believe Open XML adoption by vendors like Corel will be very easy because they have already been doing 90% of the work, doing the binary formats. The features are already there.”
    • Gary Edwards
       
       WordPerfect does an excellent import of MSWord .doc documents. But there is no conversion! It's a read-only rendering. Once you start editing the document in WP, all kinds of funny things happen, and the perfect fidelity melts away like the wicked witch of the west in a bucket full of water.
  • Another benefit Paoli claims for OOXML is performance. “A lot of things are designed differently because we believe it will work faster. The spreadsheet format has been designed for very big spreadsheets because we know our users, especially in the finance industry, use very large spreadsheets.
    • Gary Edwards
       
       Wrong. The da Vinci plug-in prototype we demonstrated to Massachusetts on June 19th, 2006 proved that there is little or no difference in spreadsheet performance between an OOXML file and an ODF file.

       In fact, the ODF version of the extremely large test file beat the OOXML load by 12 seconds.

      Where the performance difference comes in is at the application level. MS Excel can load a OOXML version of a large spreadsheet faster than OpenOffice can load an ODF version of that same spreadsheet.

      If you eliminate the application differential, and load the OOXML file and the ODF version of that same spreadsheet into a plug-in enabled Excel, the performance differences are negligible.

       The reason for this is that the OOXML plug-in for Excel has a conversion overhead identical to the da Vinci plug-in for Excel. It has nothing to do with the file format, and everything to do with the application.

      ~ge~
  • Paoli points to the conversion errors as evidence of how poorly ODF can represent legacy Office documents. My hunch is that this has more to do with the poor quality of the converter.
    • Gary Edwards
       
       Note that these OASIS ODF TC November 20th iX "interoperability enhancement" suggestions were submitted by Novell as part of their effort to perfect an OOXML plug-in for OpenOffice!

      "Lists" were th first of these iX items to be submitted as formal proposal. And Sun fought that list proposal viciously for the next four months. The donnybrook resulted i a total breakdown of the ODF consensus process. But, it ensured that never again would anyone be stupid enough to challenge Sun's authority and control of the OASIS ODF TC.

       Sun made it clear that they would viciously oppose any other effort to establish interoperability with existing Microsoft documents, applications, and processes.

      Point taken.

      ~ge~
  • the idea that Sun is preparing a reference implementation of OOXML is laughable.
    • Gary Edwards
       
      Sorry Tim. It's true. Sun and Novell are working together to develop native OOXML file support in OpenOffice. You can find this clearly stated in the Gullfoss Planet OpenOffice blogs.

      The funny thing is that Sun will have to implement and support the November 20th iX enhancements submitted by Novell!! (Or, the interoperability frameworks also submitted by Novell in February of 2007). There is simply no other way for OpenOffice to implement OOXML with the needed fidelity.

      ~ge~
  • One of the new scenarios enabled by the “custom xml parts” (again, if you read their blogs, you must have heard of this stuff) is the ability to bind xml sources and a control+layout so that it enables the equivalent of data queries (which we’ve had in Excel for many years already), just with a source which is part of the package, contrary to the typical external data source connection. Well this stuff, besides the declaration (which includes, big surprise, GUIDs and stuff like that), requires the actual Office 2007 run-time to work. So whenever MS says this stuff is interoperable, they cannot mean you can take this stuff away in another application. Because you can’t. This binding is more or less the same as the embedding of VBA macros. It’s all application-specific, and only Microsoft’s own suite knows how to instantiate this stuff.
    • Gary Edwards
       
       Stephane whacks this one out of the park! Smart Documents will replace VBA scripts, macros and OLE functionality going forward. It's also the data binding, workflow and metadata model of the future. And it's all proprietary!

       It's the combination of OOXML plus the MSOffice-Vista Stack specific Smart Documents that will lock end users into the Vista Stack for years to come.

      Watch out Google!

      ~ge~
  • Has Microsoft published the .doc spec publicly? Then why should ODF worry about the past? It’s not ODF’s concern to worry about Microsoft’s past formats. (Understand that the .doc format alone changed six times in the last eight versions of Office!) That’s Microsoft’s legacy problem, not ODF’s.
    • Gary Edwards
       
       There really is no need to access the secret binary blueprints. The ACME 376 plug-in demonstration proves this conclusively. The only thing the ACME 376 demo lacks is that we didn't throw the switch on the magic key to release all VBA scripts, macros and OLE bindings to ACME. But that can be done if someone is serious about converting the whole shebang of documents, applications and processes.

      The real problem is that although ACME 376 proves we can hit the high fidelity required, it is impossible to effectively capture that fidelity in ODF without the iX interoperability enhancements. The world expects ODF interoperability. But as long as Sun opposes iX, we can't pipe from ACME 376 to ODF.

      ~ge~
  •  
    Tim Anderson interviews Microsoft's Jean Paoli about MOOXML and ODF. Jean Paoli of course has the predictable set of answers. But Tim Anderson provides us with some interesting insights and comments of his own. There is also a gem of a comment from Stephane Rodriguez, the renowned spreadsheet expert.

    The bottom line for Microsoft has not changed. MOOXML exists because of the need for an XML file format compatible with the legacy of existing MSOffice binary documents. He claims that ODF is not compatible, and offers the "page borders" issue as an example.

    Page borders? What's that got to do with the ODF file format? These are application-specific, application-bound proprietary graphics that cannot be ported to any other application - like OpenOffice. The reason has nothing whatsoever to do with ODF and everything to do with the fact that the page border library is bound to MSOffice and not available to other applications like OpenOffice.

    So here is an application-specific feature that Jean Paoli claims cannot be expressed in ODF, but can in MOOXML. But when we are running the da Vinci ODF plugin in MSWord, there is no problem whatsoever in capturing the page borders in ODF! No problem at all!

    The problem is opening up that same da Vinci MSWord document in OpenOffice.  That's where the page borders are dropped.  The issue is based entirely on the fact that OpenOffice is unable to render these MSWord specific graphics bound to an MSOffice only library.

    If however we take that same page-border-loaded da Vinci MSWord document, and send it halfway across the world to another MSWord desktop running da Vinci, the da Vinci plugin easily loads the ODF document into MSWord, where it is perfectly rendered, page borders and all!

    Now I will admit that this is one very difficult issue to understand. If not f
  •  
    Great interview. Tim can obviously run circles around poor Jean Paoli.
Gary Edwards

Is It Game Over? - ODF Advocate Andy UpDegrove is Worried. Very Worried

  • This seems to me to be a turning point for the creation of global standards. Microsoft was invited to be part of the original ODF Technical Committee in OASIS, and chose to stand aside. That committee tried to do its best to make the standard work well with Office, but was naturally limited in that endeavor by Microsoft's unwillingness to cooperate. This, of course, made it easier for Microsoft to later claim a need for OOXML to be adopted as a standard, in order to "better serve its customers." The refusal by an incumbent to participate in an open standards process is certainly its right, but it is hardly conduct that should be rewarded by a global standards body charged with watching out for the best interests of all.
  •  
    Andy Updegrove takes on the issue of Microsoft submitting their proprietary "XML alternative to PDF" proposal to Ecma for consideration as an international standard. MS XML-PDF will complement ECMA 376 (OOXML - OfficeOpenXML), which is scheduled for ISO vote in September of 2007. Just a bit over 60 days from today.

    Andy points out some interesting things, such as the "Charter" similarities between the MS XML-PDF and MS OOXML submissions to Ecma:

    Scope: The goal of the Technical Committee is to produce a formal standard for office productivity applications within the Ecma International standards process which is fully compatible with the Office Open XML Formats. The aim is to enable the implementation of the Office Open XML Formats by a wide set of tools and platforms in order to foster interoperability across office productivity applications and with line-of-business systems. The Technical Committee will also be responsible for the ongoing maintenance and evolution of the standard.

    Programme of Work: Produce a formal standard for an XML-based electronic paper format and XML-based page description language which is consistent with existing implementations of the format called the XML Paper Specification,… [in each case, emphasis added]

    If that sounds familiar, it should, because it echoes the absolute directive of the original OOXML technical committee charter, wh
Gary Edwards

Between a rock and a hard place: ODF & CIO's - Where's the Love?

  • So I'm disappointed. And not just on behalf of open documents, but on behalf of the CIOs of this country, who are now caught between a rock and a hard place, without a paddle to defend themselves with if they want to do anything new, innovative and necessary, if a major vendor's ox might be gored in consequence. After the impressive lobbying assault mounted over the past six months against open document format legislation, I expect you won't be hearing of many state IT departments taking the baton back from their legislators. And who can blame them? If they tried, it wouldn't be likely to be anything as harmless as an open document format that would bite them in the butt.
  •  
    Andy Updegrove weighs in on the wave of ODF legislative failures first described by Eric Lai and Gregg Keizer, who compiled the grim data in a story they posted at ComputerWorld last week titled "Microsoft trounces pro-ODF forces in state battles over open document formats".

    Andy believes that it is the failure of state legislators to do their job that accounts for these failures. He provides three reasons for this being a failure of legislative duty, the most interesting of which is the claim that legislators should be protecting CIOs from the ravages of aggressive vendors.


    The sad truth is that state CIO's are not going to put their careers on the line for a file format after what happened in Massachusetts.


    Andy puts it this way:
      

    And second, in a situation like this, it is a cop out for legislatures to claim that they should defer to their IT departments to make decisions on open formats.  You don't have to have that good a memory to recall why these bills were introduced in the first place: not because state IT departments aren't a good place to make such decisions, but because successive State CIOs in Massachusetts had been so roughly handled in trying to make these very decisions that no state CIO in his or her right mind was likely to volunteer to be the next sacrificial victim.
    As both Peter Quinn and Louis Gutierrez found out, trying to make responsible standards-related decisions whe
Gary Edwards

Linux Foundation Legal : Behind Putting the OpenDocument Foundation to Bed (without its...

  • CDF is one of the very many useful projects that W3C has been laboring on, but not one that you would have been likely to have heard much about. Until recently, that is, when Gary Edwards, Sam Hiser and Marbux, the management (and perhaps sole remaining members) of the OpenDocument Foundation decided that CDF was the answer to all of the problems that ODF was designed to address. This announcement gave rise to a flurry of press attention that Sam Hiser has collected here. As others (such as Rob Weir) have already documented, these articles gave the OpenDocument Foundation’s position far more attention than it deserved. The most astonishing piece was written by ZDNet’s Mary Jo Foley. Early on in her article she stated that, “the ODF camp might unravel before Microsoft’s rival Office Open XML (OOXML) comes up for final international standardization vote early next year.” All because Gary, Sam and Marbux have decided that ODF does not meet their needs. Astonishing indeed, given that there is no available evidence to support such a prediction.
  •  
    Uh?  The ODF failure in Massachusetts doesn't count as evidence that ODF was not designed to be compatible with existing MS documents or interoperable with existing MSOffice applications?

    And it's not just the da Vinci plug-in that failed to implement ODF in Massachusetts!  Nine months later Sun delivered their ODF plug-in for MSOffice to Massachusetts.  The next day, Massachusetts threw in the towel, officially recognizing MS-OOXML (and the MS-OOXML Compatibility Pack plug-in) as a standard format for the future.

    Worse, the Massachusetts recognition of MS-OOXML came just weeks before the September 2nd ISO vote on MS-OOXML. Why not wait a few more weeks? After all, Massachusetts had conducted a year long pilot study to implement ODF using ODF desktop office suite alternatives to MSOffice. Not only did the rip-out-and-replace approach fail, but they were also unable to integrate OpenOffice ODF desktops into existing MSOffice-bound workgroups.

    The year long pilot study was followed by another year long effort trying to implement ODF using the plug-in approach.  That too failed with Sun's ODF plug-in the final candidate to prove the difficulty of implementing ODF in situations where MSOffice workgroups dominate.

    California and the EU-IDABC were closely watching the events in Massachusetts, as was most every CIO in government and private enterprise. Reasoning that if Massachusetts was unable to implement ODF, they would fare no better, California CIOs flatly refused IBM and Sun's efforts to get a pilot study underway.

    Across the pond, in the aftermath of Massachusetts CIO Louis Gutierrez's resignation on October 4th, 2006, the EU-IDABC set about developing their own file format, ODEF. The Open Document Exchange Format splashed into the public discussion on February 28th, 2007, at the "Open Document Exchange Workshop" held in Berlin, Germany.

    Meanwhile, the Sun ODF plug-in is fl
Gary Edwards

NYS Open Records Discussion Must Recognize Technical Requirements

  •  
    While the workgroup failed to decide between "choice" (Microsoft's mantra) and "openness" (the ODF mantra), predictably punting this question to a new Electronic Records Committee, it did issue a number of interesting findings, the most important of which reads as follows:

    In the office suite format debate, there currently is no compelling solution for the State's openness needs. The State needs open standards and formats. Simultaneously, the State needs electronic records to be preserved in their original formats whenever possible. Many Request for Public Comments commenters, particularly in response to the e-discovery questions, stated preserving a record in the same format as it was created results in a more faithful record and diminishes the possibility of expensive e-discovery disputes. This is important to ensure future generations of New Yorkers can access the permanently valuable electronic records being created today. Moreover, State Archives emphasizes creating records in open formats makes it easier to preserve their essential characteristics and demonstrates they are authentic (i.e., they were created in the course of State government business and have not been altered without proper authorization).

    I imagine that the workgroup must have found some level of solace in arriving at the one conclusion that all the experts seem to agree on: that electronic documents should be published using the same format in which they are created. If this principle held true for state documents, it would reduce the job of the new Electronic Records Committee to deciding between three alternatives: (1) require all state agencies to create and publish their documents in OOXML, (2) require all state agencies to create and publish their documents in ODF, or (3) allow each agency to decide which of these formats, OOXML or ODF, they will use in creating and publishing their documents.

    Unfortunately, this central assumption is incorrect, and adopting it as a basi
Gary Edwards

ODF and OOXML - The Final Act

  • The format war between Microsoft’s Open Office XML (OOXML) and the open source OpenDocument Format (ODF) has flared up again, right before the looming second OOXML ISO vote in March.
  • “ISO has a policy that, wherever possible, there should only be one standard to maximise interoperability and functionality. We have an international standard for digital documentation, ODF,” IBM’s local government programs executive Kaaren Koomen told AustralianIT.
  • ODF has garnered some criticism for being a touch limited in scope, however, one of its strengths is that it has already been accepted as a worldwide ISO standard. Microsoft’s format on the other hand, has been criticised for being partially proprietary, and even a sly attempt by the software giant to hedge its bets and get in on open standards while keeping as many customers locked into its solutions as possible.
    • Gary Edwards
       
      A "touch limited in scope"? Youv'e got to be kidding. ODF was not defined to be compatible with the billions of MSOffice binary (BIN) documents. Nor was it designed to further interoperability with MSOffice.
      Given that there are over 550 million MSOffice desktops, representing upwards of 95% of all desktop productivity environments, this discrepancy of design would seem to be a bit more than a touch limited in scope!
       Many would claim that this limitation was due to two factors: first, that Microsoft refused to join the OASIS ODF TC, which would have resulted in an expanded ODF designed to meet the interoperability needs of the great herd of 550 million users; and second, that Microsoft refused to release the secret binary blueprints.
       Since it turns out that both IBM and Sun have had access to the secret binary blueprints since early 2006, and in the two years since have done nothing to improve ODF interop and conversion fidelity, this second claim doesn't seem to hold much water.
      The first claim, that Microsoft didn't participate in the OASIS ODF process, is a bit more interesting. If you go back to the first OASIS ODF Technical Committee meeting, December 16th, 2002, you'll find that there was a proposal to amend the proposed charter to include the statement that ODF (then known as Open Office XML) be compatible with existing file formats, including those of MSOffice. The "MSOffice" reference was of course not included because ODF sought to be application, platform and vendor independent. But make no mistake, the discussion that day in 2002 was about compatibility and the conversion of the legacy BINs into ODF.
      The proposal to amend the charter was tabled. Sun objected, claiming that people would interpret the statement as a direct reference to the BINs, clouding the charter's purpose of application, platform and vendor independence. They proposed that the charter amendment b
    • Gary Edwards
       
      Will harmonization work? I don't think so. The problem is that the DIN group is trying to harmonize two application-specific formats. OpenOffice has one way of implementing basic document structures, and MSOffice another. These differences are directly reflected in the related formats, ODF and OOXML. Any attempt to harmonize ODF and OOXML will require that the applications, OpenOffice and MSOffice, be harmonized! There is no other way of doing this unless the harmonized spec has two different methods for implementing basic structures like lists, tables, fields, sections and page dynamics - not to mention the problems of feature disparities. (The markup sketch below shows how far apart the two formats are on something as simple as a list.) If the harmonized spec has two different implementation models for basic structures, interoperability will suffer enormously. And interoperability is, after all, the purpose of the standardization effort.

      That brings us to a difficult compromise. Should OpenOffice compromise its "innovative" features and methods in favor of greater interoperability with MSOffice and billions of binary documents? Let me see: 100 million OpenOffice installs vs. 550 million MSOffice installs bound to workgroup-workflow business processes - many of which are critical to day-to-day business operations? Sun and IBM have provided the answer to this question. They are not about to compromise on OpenOffice innovation! They believe that since their applications are free, the cost of ODF-mandated "rip out and replace" is adequately offset.

      Events in Massachusetts prove otherwise! On July 2nd, 2007, Sun delivered to Massachusetts the final version of their ODF plug-in for MSOffice. That night, after reviewing and testing the 135 critical documents, Massachusetts made a major change to their ETRM web site. They amended the ETRM to fully recognize OOXML as an acceptable format standard going forward. The Massachusetts decision to overturn th
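      To make the structural gap concrete, here is a minimal, hedged markup sketch - the element names come from the published ODF and WordprocessingML schemas, but the list text and style names are invented - showing how each format encodes a one-item bulleted list:

          <!-- ODF (content.xml): lists are nested containers -->
          <text:list text:style-name="L1">
            <text:list-item>
              <text:p text:style-name="P1">First item</text:p>
            </text:list-item>
          </text:list>

          <!-- OOXML (document.xml): lists are flat paragraphs that point
               at a numbering definition kept in a separate numbering.xml part -->
          <w:p>
            <w:pPr>
              <w:numPr>
                <w:ilvl w:val="0"/>
                <w:numId w:val="1"/>
              </w:numPr>
            </w:pPr>
            <w:r><w:t>First item</w:t></w:r>
          </w:p>

      A converter has to map between a nested container model and a flat paragraph-plus-reference model; multiply that by tables, fields, sections and page dynamics and you have the harmonization problem in miniature.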
    • Gary Edwards
       
      The Burton Group did not recommend that ISO recognize OOXML as a standard! They pointed out that the marketplace is going to implement OOXML by default simply because it's impossible to implement ODF in situations where MSOffice dominates. ISO should not go down the slippery slope of recognizing application-platform-vendor specific standards. They already made that mistake with ODF, and recognizing OOXML is hardly the fix. What ISO should be doing is demanding that ODF fully conform with ISO Interoperability Requirements, as identified in the May 2006 directive! Forget OOXML. Clean up ODF first.
  •  
    Correcto mundo! There should be only one standard to maximise interoperability and functionality. But ODF is application specific to the way OpenOffice works. It was not designed from a clean slate. Nor was the original 2002 OpenOffice XML spec designed as an open source effort! Check the OOo source code if you doubt this claim. The ONLY contributors to Open Office XML were Sun employees!

    What the world needs is in fact a format standard designed to maximise interoperability and functionality. This requires a total application-platform-vendor independence that neither ODF nor OOXML can claim. The only format that meets these requirements is the W3C's family of HTML-XML formats. These include advancing Compound Document Framework format components such as (X)HTML-5, CSS-3, XForms, SVG and SMIL. The W3C's CDF does in fact meet the marketplace needs of a universal format that is open, unencumbered and totally application, platform and vendor independent (see the sketch below). The only trick left for CDF is proving that legacy desktop applications can actually implement conversions from existing in-memory-binary-representations to CDF without loss of information.
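    To make that claim concrete, here is a minimal, hedged sketch - not a normative CDF profile, and all content is invented - of what a compound document amounts to: plain XHTML carrying another W3C vocabulary (SVG) inline:

        <?xml version="1.0" encoding="UTF-8"?>
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <title>Compound document sketch</title>
          </head>
          <body>
            <p>Quarterly results:</p>
            <!-- an inline SVG chart; a CDF profile could also pull this in by reference -->
            <svg xmlns="http://www.w3.org/2000/svg" width="220" height="120">
              <rect x="20" y="60" width="40" height="40" fill="steelblue"/>
              <rect x="80" y="30" width="40" height="70" fill="steelblue"/>
            </svg>
          </body>
        </html>

    Every vocabulary in the container is vendor-neutral W3C markup, so nothing ties the document to any one editor's layout engine - which is exactly the independence claimed above.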
Gary Edwards

Official Google Blog: Pagination comes to Google Docs - 0 views

  •  
    Although you need Chrome for the new Google Docs pagination feature, the key here is that gDocs now supports the CSS3 pagination module! (See the sketch after the excerpt.)

    excerpt: Today, we're doing another first for web browsers by adding a classic word processing feature: pagination, the ability to see visual pages on your screen. We're also using pagination and some of Chrome's capabilities to improve how printing works in Google Docs.

    Native Printing: Pagination also changes what's possible with printing in modern browsers. We've worked closely with the Chrome team to implement a recent web standard, CSS3, so we can support a feature called native printing. Before, if you wanted to print your document we'd need to first convert it into a PDF, which you would then need to open and print yourself. With native printing, you can print directly from your browser and the printed document will always exactly match what you see on your screen.
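    For readers who haven't met the CSS3 paged-media rules, here is a minimal, hedged sketch (the page size, margins and content are invented values, not Google's actual stylesheet) of the kind of markup a browser must honor before it can paginate and natively print a document:

        <html xmlns="http://www.w3.org/1999/xhtml">
          <head>
            <style type="text/css">
              /* CSS3 Paged Media: define the printed page box */
              @page {
                size: 8.5in 11in;  /* US Letter */
                margin: 1in;
              }
              /* force each chapter heading onto a fresh page */
              h1 { page-break-before: always; }
            </style>
          </head>
          <body>
            <h1>Chapter 1</h1>
            <p>Body text flows into as many page boxes as it needs.</p>
          </body>
        </html>

    Once the browser lays content into page boxes like these, "print" can mean rendering those same boxes directly instead of detouring through a PDF.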
Gary Edwards

ODF Civil War: Bulll Run - Suggested Changes on the Metadata proposal - OASIS ODF - 0 views

  • From our perspective it would be better to aim for doing the job in ODF 1.2, even if that requires delay. We will oppose ODF 1.2 at ISO unless the interoperability warts are cleaned up. What the market requires is no longer in doubt. See the slides linked above and further presentations linked from this page, < http://ec.europa.eu/idabc/en/document/6474/5935>. Substantial progress toward those goals would seem to be mandatory to maintain Europe's preference for a harmonized set of file formats that uses ODF to provide the common functionality. Delaying commencement of such work enhances the likelihood that governments will tire of waiting for ODF to become interoperable with MS Office and simply go with MOOXML. We may not be able to force Microsoft to participate in the harmonization work, but we will be in a far better position if we have done everything we can in aid of that interoperability without Microsoft's assistance. As the situation stands, we have what is known in the U.S. as a "Mexican stand-off," where neither side has taken a solitary step toward what Europe has requested. We have decided to do that work via a fork of ODF; it is up to this TC whether it wishes to cooperate in that effort.
  •  
    This is the famous marbux response to Sun regarding Sun's attempt to partially implement ODF 1.2 XML-RDF metadata.  It's a treasure.

    There is one problem with marbux's statement though.  We had decided long ago not to fork ODF even if the five iX "interoperability enhancement" proposals were refused by the OASIS ODF TC.   This assurance was provided to Massachusetts CIO Louis Gutierrez with the first ODF iX proposal submitted on July 12th, 2006.  Louis ended up signing off on three iX proposals before his resignation October 4th, 2006.

    The ODF iX enhancements were essential to saving ODF in Massachusetts.  Without them, there was no way our da Vinci plug-in could convert existing MSOffice documents and processes to ODF with the needed round trip fidelity.

    For nearly a year we tried to push through some semblance of the needed iX enhancements.  We also tried to push through a much needed Interoperability Framework, which will be critical to any ISO approval of ODF 1.2.

    Our critics are correct in that every iX effort was defeated, with Sun providing the primary opposition. 

    Still, rather than fork ODF, we are simply going to move on.

    On October 4th, 2006, all work on ODF da Vinci ended - not to be resumed unless and until we had the ODF iX enhancements we needed to crack the MSOffice bound workgroup-workflow business process barrier.

    In April of 2007, with our OASIS membership officially shredded by OASIS management, bleeding from the List Enhancement Proposal donnybrook, and with our last hope - the XML-RDF metadata work - totally defeated, we threw in the towel.

    Since then we've moved on to CDF, the W3C Compound Document format.  Incredibly, CDF is able to do what ODF cannot.  With CDF we can solve the three primary problems confronting governments and MSOffice-bound workgroups everywhere.

    The challenge for these g
Gary Edwards

AlphaDog Barks Loudly: Why Can't You Guys Just Get Along and Solve MY MSOffice Problem!... - 0 views

  • First, let me say that I am a CIO in a small (20 employees but growing fast) financial services company. I am well aware of how locked-in I am getting with our MS-only shop. I am trying to see my way out of it, but this "ODF vs ODFF" debate is leaving me very confused and no one is working to clear the fog. I beg all parties to really work towards some sort of defined understanding. I don't need cooperation. But what I don't have is well-defined positions from all parties. As it is, I feel safer staying the course with MS right now, honestly. It's what I know vs the mystery of this "open cloud" and all the bellicose infighting. How's that for "in the trenches" data? I posted a comment on Andy's blog, and I will post the same comment here for your group (minor edits): I will admit to being very, very confused by all of this ODF vs ODFF posturing. I will try to put my current thoughts in short form, but it will be a muddled mess. I warned you!

    From what I gather, the OpenDocument Foundation (ODFF) is attempting to create more of an interop format for working against a background MS server stack (Exchange/Sharepoint). You worry that MS is further cementing their business lock-in by moving more and more companies into dependency on not only the client-side software but also the MS business stack that has finally evolved into a serious competitive set. At that level, and in your view, the "atomic unit" is the whole document. The encoded content is not of immediate concern. ODF is concerned with the actual document content, which ODFF is prepared to ignore. The "atomic unit" is the bits and parts in the document. They want to break the proprietary encodings that MS has that lock people into MSOffice. The stack is not of any immediate concern.

    So, unless I misunderstand either camp, ODF is first attacking the client end of the stack, and ODFF is attacking the backbone server end of the stack. The former wants to break the MSOffice monopoly by allowing people to escape those proprietary encodings, and the latter wants to prevent the dependency on server software like Exchange and Sharepoint by allowing MS documents to travel to other destinations than MS "server" products. Is this correct? I have yet to see anyone summarize the differences in any non-partisan way, so I am at a loss and not enough information is forthcoming for me to see what's what.

    The usual diatribe by people closer to the action is to go into the history of ODF or ODFF, talk about old slights and lost fights, and somehow try to pull at emotional heartstrings so as to gain mindshare. Gary's set of comments on this blog have that flavor. This is childish on both sides. Furthermore, the word "orthogonal" comes to mind. I often see people too busy arguing their POV, and not listening to others, when there is no real argument to keep making. It's apples-and-oranges. ODF vs ODFF seems like they are caught in this trap. Everyone wants to win an argument that has no possible win because the participants are not arguing about the same thing.

    Tell me: Why can't the two parties get along? I can see a "cooperative" that attacks the entire stack. Am I the only one seeing this? Am I wrong? If yes, what's the fundamental difference that prevents cooperation?
  •  
    AlphaDog: When asked about the source of his incredible success, the hockey great Wayne Gretzky replied, "I skate to where the puck is going to be, not where it has been." You and I need to do the same.

    Let me state our position as this: the desktop office suite is where the puck has been. The Exchange/SharePoint Hub is where it's going to be. The E/S Hub is the core of an emerging Microsoft-specific web platform which we've also called the MS Stack. In this stack, MSOffice is relegated to the task of a rich client end user interface into the E/S Hub of business processes and collaborative computing connections. The rest of the MS Stack swirls like a galaxy of services around the E/S Hub. Key to Microsoft's web platform is the gradual movement of MSOffice-bound business processes to the E/S Hub, where they connect to the rest of the MS Stack.

    So what now, you might ask? Some things to consider before we get down to brass tacks:

    ... There is a way to break the monopolist's MSOffice desktop grip, but it's not a rip-out-and-replace-the-desktop model. It's a beat-them-at-the-E/S-Hub model that then opens up the desktop space. And opens it up totally. (This is a 3-5 year challenge though, since it's a movement of currently bound business processes.)

    ... It's all about the business processes. Focusing entirely on the file formats is to miss the big picture.

    ... The da Vinci group's position is this: we believe we can neutralize and repurpose MSOffice by converting in proce
Gary Edwards

Brian Jones: Open XML Formats : Mapping documents in the binary format (.doc; .xls; .pp... - 0 views

  • The second issue we had feedback on was an interest in the mapping from the binary formats into the Open XML formats. The thought here was that the most effective way to help people with this was to create an open source translation project to allow binary documents (.doc; .xls; .ppt) to be translated into Open XML. So we proposed the creation of a new open source project that would map a document written using the legacy binary formats to the Open XML formats. TC45 liked this suggestion, and here was the TC45 response to the national body comments: We believe that Interoperability between applications conforming to DIS 29500 is established at the Office Open XML-to- Office Open XML file construct level only.
    • Gary Edwards
       
      And here I was betting that the blueprints to the secret binaries would be released the weekend before the September 2nd, 2007 ISO vote on OOXML! Looks like Microsoft saved the move for when they really had to use it; just weeks before the February ISO Ballot Resolution Meetings set to resolve the Sept 2nd issues.

      The truth is that years of reverse engineering have depleted the value of keeping the binary blueprints secret. It's true that interoperability with MSOffice in the past was nearly entirely dependent on understanding the secret binaries. Today however, with the rapid emergence of the Exchange/SharePoint juggernaut, interop with MSOffice is no longer the core issue. Now we have to compete with E/S, and it is the E/S interfaces, protocols and document APIs and dependencies that must be reverse engineered.

      The E/S juggernaut is now surging to 70% or more of the market. These near-monopoly levels of market penetration are game changing. One must reverse engineer or license the .NET libraries to crack the interop problem. And this time it's not just MSOffice. Today one must crack into the MS Stack, whose core is that of MSOffice <> E/S.

      So why not release the secret binary blueprints? If that's the cost of getting the application, platform and vendor specific OOXML through ISO, then it's a small price to pay for your own international standard.
  •  
    Well well well. We knew that IBM had access to the secret binary blueprints back in 2006. Now we know that Sun ALSO had access!
    And why is this important? In June of 2006, Massachusetts CIO Louis Gutierrez asked the OpenDocument Foundation's da Vinci Group to work with IBM on developing the da Vinci ODF plug-in clone of Microsoft's OOXML Compatibility Pack plug-in. When we met with IBM they were insistent that the only way OASIS ODF could establish sufficient compatibility with MSOffice and the billions of binary documents would be to have the secret blueprints open.
    Even after we explained to IBM that da Vinci uses the same internal conversion process that the OOXML plug-in used to convert binaries, IBM continued to insist that opening up the secret binaries was a primary objective of the OASIS ODF community.
    For sure this was important to IBM and Sun, but the secret binaries were of no use to us. da Vinci didn't need them. What da Vinci needed instead was a subset of ODF designed for the conversion of those billions of binary documents! A need opposed by Sun.
    Sun of course would spend the next year developing their own ODF plug-in for MSOffice. But here's the thing: it turns out that Sun had complete access to the secret binary blueprints dating back to 2006!!!!!!
    So even though IBM and Sun have had access to the blueprints since 2006, they have been unable to provide effective conversions to ODF!
    This validates a point the da Vinci group has been trying to make since June of 2006: the problem of perfecting a high fidelity conversion between the billions of binaries and ODF has nothing to do with access to the secret binary blueprints. The real issue is that ODF was NOT designed for the conversion of those binary documents.
    It is true that one could eXtend ODF to achieve the needed compatibility. But one has to be very careful before taking this ro
Gary Edwards

Q&amp;A: Calif. CIO Steers Clear of Ideology on File Formats - 0 views

  • We’re trying to view it as a straight business decision. What are the costs associated with one approach over another? Does it serve all of our business needs? If it doesn’t serve a business need, how do we satisfy that business need? We’re trying to view this just as a plain-vanilla, nonpartisan, nonideological issue.
  •  
    A must read.  Carol Sliwa of ComputerWorld interviews Clark Kelso, California CIO.  ODF is the main issue, with Clark casting all his answers in the context of business decisions.  Carol, of course, asks the best questions of any journalist alive.

    Keep in mind that ComputerWorld and the Boston Globe filed Freedom of Information Act requests in Massachusetts.  They got access to all the eMail, documentation, and conferencing notes concerning ODF and Microsoft.  Carol's interview with Louis Gutierrez last week was filled with the same hard questions Clark Kelso fielded so deftly.

    The "committee" Clark Kelso has set up to look at these issues is headed by Bill Welty, the CIO of the California Air Resources Board.  Bill is a long time opensource - Linux guy, but will be the firs tto admit that Microsoft is the only vendor providing a means of getting everything inot XML.  And that's the heart of any SOA strategy, "First, get everything into XML".

    With a 500 million MSOffice desktop-bound business process head start, Microsoft has the extreme advantage in this much needed migration to XML.

    They now have their own proprietary application- and platform-bound version of XML: MOOXML (Microsoft Office Open XML), heading for international standardization at ISO.

    They now have their XML Hub in place: the Exchange/SharePoint Hub.  This is also an essential part of any SOA strategy.  You've got to have an XML Hub where the XML information streams and service connections to legacy black box systems can be piped into, managed and resolved.  The XML Hub must also provide an end user interface to these information flows - one that converges and integrates information, documents, data, and workflows into an easy-to-manage-and-participate-in interface.  The E/S Hub excels at this because it covers the fundamentals of eMail, messaging, portal, calendar, scheduling, c
Gary Edwards

FAA May Ditch Microsoft's Windows Vista And Office For Google And Linux Combo - Technol... - 0 views

  • Bowen's compatibility concerns, combined with the potential cost of upgrading the FAA's 45,000 workers to Microsoft's next-generation desktop environment, could make the moratorium permanent. "We're considering the cost to deploy [Windows Vista] in our organization. But when you consider the incompatibilities, and the fact that we haven't seen much in the way of documented business value, we felt that we needed to do a lot more study," said Bowen. Because of Google Apps' sudden entry into the desktop productivity market
  •  
    The FAA issues their "NO Vista" mandate, hinting that it might be permanent if they can come up with MSOffice alternatives.  They are looking at Google Apps!

    Okay, so plan B does have legs.  The recent failure of ISO/IEC to stand up to the recidivist reprobate from Redmond is having repercussions.  Who would have ever thought ISO would fold so quickly, without ceremony?  One day there are 20 out of 30 JTC1 national bodies (NBs) objecting to Microsoft's proprietary XML proposal, the MOOX Ecma 376 specification, and the next ISO is approving without comment the placing of MOOX into the ISO fast track, where approval is near certain.  With fast track, the technical objections and contradictions are assumed to be the province of Ecma, and not the JTC1 experts group.

    Apparently the USA Federal Government divisions had a plan B contingency for just such a case.  And why not?  Microsoft was able to purchase a presidential pardon for their illegal anti trust violations.  If they can do that, what's to stop them from purchasing an International Standard?  Piece of cake!

    But Google Apps?  And I say that as one who uses Google Docs every day.

    The problem of migrating away from MSOffice and MOOX to ODF or some other "open" XML portable file format is that there are two barriers one must cross.

    The first barrier is that of converting the billions of MS binary documents into ODF XML.

    The second is that of replacing the MSOffice-bound business processes that drive critical day-to-day business operations.

    Google Apps is fine for documents that benefit from collaborative computing activities.  But there is no way one can migrate MSOffice-bound business processes - the workgroup-workflow documents - to Google Apps.  For one thing, Google Apps is unable to facilitate important capabilities like XForms (see the sketch below).  Nor can they round trip an ODF document with the needed fidelity a
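    As a hedged illustration of that barrier - the element names below follow the XForms 1.0 vocabulary, but the instance data, field names and submission URL are invented for this sketch - here is the kind of form markup that gets embedded in workgroup-workflow documents:

        <xf:model xmlns:xf="http://www.w3.org/2002/xforms">
          <xf:instance>
            <order xmlns="">
              <customer/>
              <amount/>
            </order>
          </xf:instance>
          <!-- hypothetical endpoint; real documents post into the workflow system -->
          <xf:submission id="submit-order" action="http://example.com/orders" method="post"/>
        </xf:model>
        <!-- a form control bound to the instance data above -->
        <xf:input ref="customer" xmlns:xf="http://www.w3.org/2002/xforms">
          <xf:label>Customer</xf:label>
        </xf:input>

    A suite that cannot read, bind and round trip markup like this doesn't just lose formatting; it severs the document from the business process behind it.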
Gary Edwards

Sun Supports OOXML as an ISO Standard? - 0 views

  • Sun Microsystems Inc., largely considered an avowed opponent of Open XML because of its own development and support for the competing, ODF-based StarOffice suite, found itself in the unexpected position of stating its support for ratifying Open XML -- albeit after some changes in the proposal are made.
  •  
    Quote: "Sun Microsystems Inc., largely considered an avowed opponent of Open XML because of its own development and support for the competing, ODF-based StarOffice suite, found itself in the unexpected position of stating its support for ratifying Open XML -- albeit after some changes in the proposal are made."

    "We wish to make it completely clear that we support DIS 29500 becoming an ISO Standard and are in complete agreement with its stated purposes of enabling interoperability among different implementations and providing interoperable access to the legacy of Microsoft Office documents," Jon Bosak, a Sun representative to V1, wrote in an e-mail to other committee members over the weekend. "Sun voted No on Approval because it is our expert finding, based on the analysis so far accomplished in V1, that DIS 29500 as presently written is technically incapable of achieving those goals, not because we disagree with the goals or are opposed to an ISO Standard that would enable them."

    Sun "found itself in the unexpected position of stating its support for ratifying OOXML"?  What????  Is this the official position of Sun?

    For the nearly five years that I have been a member of the OASIS ODF TC, Sun has opposed
Gary Edwards

Can't We All Just Get Along? - 0 views

  •  
    Another call for the "convergence" of ODF and MS-OOXML, this time from the government technology magazine, GCN.com.

    IMHO, there is a very steep technical barrier to both the harmonization and/or convergence of ODF and OOXML. The problem is that these file formats are application specific and bound respectively to OpenOffice and MSOffice feature sets and implementation models. The only way to perfect a harmonization or convergence file format effort is to dramatically change the reference applications.

    With over 500 million MSOffice workgroup-bound desktops in the world, changing that suite of applications is likely to break business processes with a global disruption factor that is simply unacceptable. OpenOffice, on the other hand, could better sustain the needed layout engine changes, but estimates are that it will take 3-5 years to accomplish this.

    Sun has often stated at the OASIS ODF TC (technical committee) that OpenOffice will not be bound and limited by having to mirror MSOffice features and implementation models. These arguments are often called application innovation rights.

    In the past year alone, there have been no less than five ODF iX "interoperability enhancement" proposals submitted to the OASIS ODF TC members for discussion. The iX proposals are designed to solve the problem of high fidelity "round trip" conversion of MSOffice binary and xml documents with OpenOffice ODF documents.

    Sadly, Sun and the other ODF application vendors fought and thoroughly defeated every aspect of these proposals, even though the first three iX proposals were signed off on by the Massachusetts ITD and considered vital to the successful implementation of ODF there. ODF of course proved impossible to implement in Massachusetts. And without the iX interoperability enhancements, it is impossible for ODF plug-ins for MSOffice to perfect the high fidelity "round trip" conversion of existing doc
Gary Edwards

Microsoft Will Support ODF! But Only If It Doesn't 'Restrict Choice Among Formats' - 0 views

  • By Marbux posted Jun 19, 2007 - 3:16 PM Asellus sez: "I will not say OOXML is easy to implement, but saying ODF is easier to implement just by looking at the ISO specification is a fallacy." I shouldn't respond to trolls, but I will this time. Asellus is simply wrong. Large hunks of Ecma 376 are simply undocumented. And what's more, absolutely no vendor has a featureful app that writes to that format. Not even Microsoft. There's a myth that Ecma 376 is the same as the Office Open XML used by Microsoft. It is not. I've spent a few hundred hours comparing the Ecma 376 specification (the version of OOXML being considered at ISO) to the information about the undocumented APIs used by MS Office 2007 that recently sprung loose in litigation. See http://www.groklaw.net/p...Rpt_Andrew_Schulman.pdf Each of those APIs *should* have corresponding metadata in the formats, but that metadata is not in the Ecma 376 specification.
  •  
    Incredible comment by Marbux!  With one swipe he takes out both Ecma 376 and ODF. 

    Microsoft has written a letter claiming that they will support ODF in MSOffice, but only if ISO approves Ecma 376 as a second office suite XML file format standard.  ODF was approved by ISO nearly a year ago.

    Criticizing Ecma 376 is easy.  It was designed to meet the needs of a proprietary application, MSOffice, and to meet the needs of the emerging MS Vista Stack of applications that spans desktop to server to device to web platforms.  It's filled with MS platform dependencies that make it impossibly non-interoperable with anything not fully compliant with Microsoft-owned APIs.

    Criticizing ODF however is another matter entirely.  Marbux points to the extremely poor ODF interoperability record.  If MOOXML (not Ecma 376, since that is a read-only file format) is tied to the vendor-application specific MSOffice, then ODF is similarly tied to the many vendor versions of OpenOffice/StarOffice.

    The "many vendor" aspect of OpenOffice is somewhat of a scam.  The interoperability that ODF shares across Novell Office, StarOffice, IBM WorkPlace, Red Office, and NeoOffice is entirely based on the fact that these iterations of OpenOffice are based on a single code base controlled 100% by Sun.  Which is exactly the case with MSOffice.  With this important exception - MOOXML (not Ecma 376) is interoperable across the entire Vista Stack!

    The Vista Stack is comprised of Exchange/SharePoint, MS Live, MS Dynamics, MS SQL Server, MS Internet Server, MS Groove, MS Collaboration Server, and MS Active Directory.   Behind these applications sits an important foundation of shared assets: MOOXML, Smart Documents, XAML and .NET 3.0.  All of which can be worked into third party, Stack-dependent applications through the Visual Studio .NET IDE.

    Here are some thoughts I wou
Gary Edwards

Brendan's Roadmap Updates: My @media Ajax Keynote - 0 views

  • Standards often are made by insiders, established players, vendors with something to sell and so something to lose. Web standards bodies organized as pay-to-play consortia thus leave out developers and users, although vendors of course claim to represent everyone fully and fairly. I've worked within such bodies and continue to try to make progress in them, but I've come to the conclusion that open standards need radically open standardization processes. They don't need too many cooks, of course; they need some great chefs who work well together as a small group. Beyond this, open standards need transparency. Transparency helps developers and other categories of "users" see what is going on, give corrective feedback early and often, and if necessary try errant vendors in the court of public opinion.
    • Gary Edwards
       
      Brendan's comment about the open standards process and the control big vendors have over that process is exactly right. The standards consortia are pay-to-play orgs controlled entirely by big vendors. OASIS and the OpenDocument Technical Committee are not exceptions to this problematic and troublesome truth.
      The First Law of the Internet is that interoperability trumps everything - including innovation. The problem with vendor-driven open standards is that innovation continually trumps interoperability. So much so that interop is pretty much an afterthought - as is the case with ODF and OOXML!
      The future of the Open Web will depend on open source communities banding together with governments and user groups to insist on the First Law of the Internet: interoperability. If they don't, vendors will succeed in creating slow-moving web standards designed to service their product lines. Vendor product lines compete and are differentiated by innovative features. Interoperability, on the other hand, is driven by sameness - the sharing of critical features. Driving innovation down into the interop layer is what the open standards process should be about. But as long as big vendors control that process, those innovations will reside at the higher level of product differentiation. A level that continues to break interoperability!