Document Wars: Group items matching "2009" in title, tags, annotations or url

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.
    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things.
    Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.
    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
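    To make the distinction concrete, here is a rough sketch of the same epigraph in XHTML and in DocBook (the class value is purely illustrative):

        <!-- XHTML: generic structure; any semantics ride on a class attribute -->
        <p class="epigraph">Everything should be made as simple as possible.</p>

        <!-- DocBook: the element itself carries the semantics -->
        <epigraph>
          <para>Everything should be made as simple as possible.</para>
        </epigraph>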
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
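    As a sketch of that anatomy, unzipping a typical ePub yields something like the following (the names under OEBPS/ are conventional rather than required by the standard):

        mimetype                   (the literal string "application/epub+zip")
        META-INF/container.xml     (declares where the package file lives)
        OEBPS/content.opf          (metadata, manifest, and reading order)
        OEBPS/toc.ncx              (navigation table of contents)
        OEBPS/chapter01.xhtml ...  (the book content itself, in XHTML)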
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
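    For illustration, a representative (not exhaustive) layout of an unpacked IDML archive looks like this:

        designmap.xml                     (ties the package together)
        MasterSpreads/MasterSpread_*.xml  (master pages)
        Spreads/Spread_*.xml              (layout geometry)
        Stories/Story_*.xml               (the text content)
        Resources/Styles.xml              (paragraph and character styles)
        Resources/Graphic.xml, Fonts.xml, Preferences.xml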
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
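    The kind of repair involved looks roughly like this (the class name is hypothetical; actual export output varies):

        <!-- as exported: presentation-oriented -->
        <p class="bullet">First point</p>
        <p class="bullet">Second point</p>

        <!-- after cleanup: structural XHTML -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>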
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
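    The script itself is not reproduced in this clipping, but a minimal sketch of the approach might look like the following; the ICML element names (ParagraphStyleRange, CharacterStyleRange, Content) follow InCopy's markup, while the applied style name is a placeholder and a real ICML file needs additional package header information:

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:x="http://www.w3.org/1999/xhtml">

          <!-- wrap the body's paragraphs in a single ICML document -->
          <xsl:template match="/x:html">
            <Document>
              <xsl:apply-templates select="x:body/x:p"/>
            </Document>
          </xsl:template>

          <!-- one paragraph range, referencing a template-defined style, per XHTML paragraph -->
          <xsl:template match="x:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>

        </xsl:stylesheet>

    Run through any XSLT 1.0 processor (for instance, xsltproc sketch.xsl chapter.xhtml > chapter.icml), the output can then be placed into a styled InDesign template.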
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
    As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".

Doug Mahugh : Standards-Based Interoperability - 0 views

  • Standards-Based Interoperability
  • 05 June 09
  • Interoperability without Standards
  • First, let’s consider how software interoperability works when it is not standards-based. Consider the various ways that four applications can share data, as shown in the diagram to the right.  There are six connections between these four applications, and each connection can be traversed in either direction, so there are 12 total types of interoperability involved.
  • As the number of applications increases, this complexity grows rapidly.  Double the number of applications to 8 total, and there will be 56 types of interoperability between them:
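    (The arithmetic behind those counts: n applications yield n × (n − 1) directed conversion paths, so 4 × 3 = 12 and 8 × 7 = 56; the pairwise approach grows quadratically with the number of applications.)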
  • through standards maintenance, transparency of implementation details, and collaborative interoperability testing.
    • Graham Perrin
       
      Issues relating to CalDAV are well addressed in these ways.
  • Here’s where those workarounds will need to be implemented: Note the complexity of this diagram.
  • In the real world, interoperability is almost never achieved in this way.  Standards-based interoperability is a much better approach for everyone involved,
  • whether that standard is an open one such as ODF (IS26300)
  • or Open XML (IS29500)
  • or a de-facto standard set by one popular implementation.
  • The core premise of open standards-based interoperability is this:
  • each application implements the published standard as written, and this provides a baseline for delivering interoperability.
  • the existence of a standard addresses many of the issues involved, and the other issues can be addressed
  • In the standards-based scenario, the standard itself is the central mechanism for enabling interoperability between implementations. This diagram is much simpler
  • there is no question that users of other products are massively surprised by
  • How this all applies to Office 2007 SP2
    I covered last summer the set of guiding principles that we used to guide the work we did to support ODF in Office 2007 SP2.
  • applied in a specific order
  • I’d like to revisit the top two guiding principles
  • Guiding Principle #1: Adhere to the ODF 1.1 Standard
  • Guiding Principle #2: Be Predictable
  • Being predictable is also known as the principle of least astonishment.
  • What about Bugs and Deviations?
    Of course, the existence of a published standard doesn’t prevent interoperability bugs from occurring.
  • deviations from the requirements
  • different interpretations
  • Our approach to the transparency issue has been to document the details of our implementation through published implementer notes.
  • Interoperability Testing
    The final piece of the puzzle is hands-on testing
  • What else would you like to know about how Office approaches document format interoperability?
  • a standard (evolved and improved as reality demands) is the proper foundation for resolving interoperabilty
  • All complex software has bugs, and some bugs can present significant challenges to interoperability.  Let’s consider the case that 3 of the 4 applications have bugs that affect interoperability, as shown in the diagram to the right.
  • (1) their spreadsheets having their formulas lost when interchanged with Excel 2007
  • (2) not being able to handle the formulas received in Excel 2007's ODF output.
  • I am creating my own fantasy about the state of affairs
    • Graham Perrin
       
      :-)
  • it is far too early to declare it to be unsuccessful
  • I cannot fault the Microsoft approach as incorrect
  • I was at the year-ago DII meeting where the guiding principles were announced and their application to spreadsheet formulas described.  I applauded the principles and understood the reasoning for formulas.
  • How this would impact various groups of users and non-users (who still want to interoperate) of Office 2007 did not surface in my consciousness.
  • there is NO published standard for ODF spreadsheet formulas yet.
  • Nor is there any de-facto standard that everyone agrees on.
  • the “spaghetti diagram” method, with all of the complexity and risk of bugs that entails
  • No implementer we know of has attempted that
  • In the case of spreadsheet formulas, help is on the way -- OpenFormula is under development for use with ODF 1.2.
  • I’d like to keep this thread on-topic
  • I appreciate the post, very good
  • Visually I would rather frame it in terms of convergence, a spiral.
  • and user satisfaction.
  • I doubt someone would ever find a magic bullet to interoperability
  • New Comments to this post are disabled
    • Graham Perrin
       
      Hurrah!
  •  
    Diagrams here are eye-catching.

OpenOfficers pitch Oracle on life after Sun * The Register - 0 views

  • John McCreesh, OpenOffice's head of marketing, is veering towards independence, though. He said separately he felt the "right model" is for an independent legal entity to own the trademarks and have joint copyright of the code, with its own finance and governance.
    • Alex Brown
       
      And the key word here is probably "finance".

Groklaw - When Would You Use OOXML and When ODF? -- What is OOXML For? - 0 views

  • The legacy formats are just popped into an OOXML wrapper
    • Alex Brown
       
      Funny how often this old canard is brought out. Do people really believe it?
    • Jesper Lund Stocholm
       
      I actually think it is - to some extent - true. Apart from stuff like DrawingML, CustomML etc, OOXML is a transformation of the binary stuff and hence in essence the same document format. "Someone" told me the other day that he had knowledge of a company that didn't use the "xml-ness" of OOXML to manipulate OOXML files but simply considered them TEXT files. They could do this because OOXML is very close to the binary formats.
    • Alex Brown
       
      True, but the stuff inside is XML -- I think there's a widespread view that OOXML is a lot of lightly wrapped BLOBs
    • Jesper Lund Stocholm
       
      Ok - you are possibly correct. Somehow content in a file called printerSettings.bin seems to attract higher disturbance than base64-encoded, binary attribute values with attribute name "printerSettings".
    • Jesper Lund Stocholm
       
      Actually, I think the phrase someone coined that "OOXML is just the binary document formats dressed up in angle brackets" fits just fine :o)
  •  
    It fits just fine for most of the spec, but there are also major chunks that include descriptive element and attribute names, for example, the compatibility markup volume. My sense is that these are areas where new features were introduced in Office 2007. But they kind of fly in the face of the Microsoft claims back then that the abbreviated markup was deliberately chosen to maximize execution speed. If so, why isn't all the markup in abbreviated form?

Moved by Freedom - Powered by Standards » Blog Archive » News of the Weird (A... - 0 views

  • I just don’t get it
    • Alex Brown
       
      Neither do I: but then this is not the first signal of a less than unanimous attitude towards document formats from the Old Firm.
  • The Durban 2 conference in Geneva makes me think of a bizarre mashup of the first Durban conference and what I experienced at the OOXML BRM
    • Alex Brown
       
      Not the first time somebody seems to have got confused between issues of tyranny and totalitarianism, and ... document formats. What price perspective?
    • Jesper Lund Stocholm
       
      Actually I didn't know Charles participated in the BRM?
    • Alex Brown
       
      He didn't - this is something that Andy Updegrove published at the time too. What price reality?
  • Alex is right. National transposition is a procedural relic. We should get the specs right out of software vendors and just skip this standardization crap that only justifies paying useless consultants whose status is construed as some kind of impartial judge. These kinds of failed processes have led us to believe that standards and norms could be somehow trusted; as it unfortunately turns out, that stops being true when strongly applied pressure by one large private monopoly meets the weak morals of the ones in charge of ensuring the process is being duly respected. Thank you Alex, for spelling out the truth. Your lack of impartiality and your strange behaviour during the OOXML standardization process have clarified how poorly qualified you are at patronizing others and lecturing on the ISO and other standards bodies’ processes. I wish you good luck for your next job at Microsoft.
    • Alex Brown
       
      Ah, the sound of a dummy being spat out ...

The Education of Gary Edwards - Rick Jelliffe on O'Reilly Broadcast - 0 views

  •  
    I wonder how I missed this? Incredibly, I have my own biographer and I didn't know it! The dateline is September 2008; I had turned off all my ODF-OOXML-OASIS searches and blog feeds back in October of 2007, when we moved the da Vinci plug-in to HTML+ using the W3C CDF model. Is it appropriate to send flowers to your secret biographer? Maybe I'll find some time and update his work. The gap between October 2007 and April of 2009 is filled with adventure and wonder. And WebKit!

    "....One of the more interesting characters in the recent standards battles has been Gary Edwards: he was a member of the original ODF TC in 2002 which oversaw the creation of ODF 1.0 in 2005, but gradually became more concerned about large vendor dominance of the ODF TC frustrating what he saw as critical improvements in the area of interoperability. This compromised the ability of ODF to act as a universal format."

    "....Edwards increasingly came to believe that the battleground had shifted, with the SharePoint threat increasingly needing to be the focus of open standards and FOSS attention, not just the standalone desktop applications: I think Edwards tends to see Office Open XML as a stalking horse for Microsoft to get its foot back in the door for back-end systems....."

    "....Edwards and some colleagues split with some acrimony from the ODF effort in 2007, and subsequently see W3C's Compound Document Formats (CDF) as holding the best promise for interoperability. Edwards' public comments are an interesting reflection of an person evolving their opinion in the light of experience, events and changing opportunities...."

    ".... I have put together some interesting quotes from him which, I hope, fairly bring out some of the themes I see. As always, read the source to get more info: ..... "


Interoperability vs Homogeneity « Arnaud's Open blog - 1 views

  • Interoperability vs Homogeneity
  • leaked updated document of the European Interoperability Framework (EIF)
  • taking back what could be considered one of the most advanced features of the previous document
  • ...5 more annotations...
  • how could “homogeneity” possibly qualify as a way of obtaining “interoperability”?
  • why would the EU endorse the notion of having everybody select one specific solution or system? Isn’t that in total contradiction with its very goal?
  • I seriously hope the EU realizes how misguided this move was and takes it back.
  • November 10, 2009
  • Arnaud Le Hors

EUROPA - Press Releases - Antitrust: Commission opens proceedings against MathWorks - 1 views

  • Brussels, 1 March 2012 - The European Commission has opened a formal investigation to assess whether The MathWorks Inc., a U.S.-based software company, has distorted competition in the market for the design of commercial control systems by preventing competitors from achieving interoperability with its products. The Commission will investigate whether by allegedly refusing to provide a competitor with end-user licences and interoperability information, the company has breached EU antitrust rules that prohibit the abuse of a dominant position. The opening of proceedings means that the Commission will examine the case as a matter of priority. It does not prejudge the outcome of the investigation. The investigation follows a complaint alleging that MathWorks had refused to provide a competitor with end-user software licences and accompanying interoperability information for its flagship products "Simulink" and "MATLAB", thereby preventing it from lawfully reverse-engineering in order to achieve interoperability with these two products.
  • As in the Microsoft case (see IP/04/382 and MEMO/04/70 and MEMO/07/359), the issue of software interoperability is central to this investigation. The Commission's investigation will focus on whether MathWorks' behaviour has prevented competitors from achieving interoperability with its widely used products and thereby hindered competition in breach of Article 102 TFEU. In this context, it is recalled that the European Directive 2009/24/EC on the legal protection of computer programmes also aims to foster interoperability by allowing for reverse-engineering for interoperability purposes provided that the software at issue was lawfully acquired.
  • Background MathWorks' "Simulink" and "MATLAB" software products are widely used for designing and simulating control systems. Control systems are deployed in many innovative industries such as in cruise control or anti-lock braking systems (ABS) for cars. Article 102 TFEU prohibits the abuse of a dominant position which may affect trade and prevent or restrict competition. The implementation of this provision is defined in the Antitrust Regulation (Council Regulation No 1/2003) which can be applied by the Commission and by the national competition authorities of EU Member States. The Commission has informed MathWorks and the Member States' competition authorities that it has formally opened proceedings in this case.
  •  
    Commission v. Microsoft Redux.

An Antic Disposition: The Final OOXML Update: Part I - 0 views

  • In any case, my current estimate is for us to send ODF 1.2 out for public review later this year and then to have a vote to approve it as an OASIS Standard in Q1 2010.
    • Alex Brown
       
      What are the odds?!
    • Jesper Lund Stocholm
       
      well, all we can do is to keep our fingers crossed

Microsoft Watch - Corporate - Microsoft's Stunning Court Defeat - 0 views

  • "The Court considers that the Commission was correct to conclude that the work group server operating systems of Microsoft's competitors must be able to interoperate with Windows domain architecture on an equal footing with Windows operating systems if they are to be capable of being marketed viably. The absence of such interoperability has the effect of reinforcing Microsoft's competitive position on the market and creates a risk that competition will be eliminated."
  • Here, U.S. oversight of Microsoft will continue until at least November 2009, largely because of server protocol licensing. The so-called "California group" of states—those that didn't settle the U.S. antitrust case—and other parties will likely ask the court here to align the two disclosure programs, extending the ruling's impact well beyond Europe.
    • Gary Edwards
       
      I wonder if this is correct? My understanding is that the California Group will be brushed aside by the Feds?
  •  
    Microsoft Watch's Joe Wilcox is on the job.  This particular highlighted quote speaks volumes.  The USA antitrust settlement famously allowed Microsoft to commercialize interoperability through expensive licenses - $8 million per year for just the basic package.

    It looks like the EU would force those interoperability APIs out into the open.  I wonder how this position will impact the November 12th hearing on lifting the USA antitrust oversight?  We have the EU saying the monopolist is illegally maintaining their monopoly through various interop barricades.  And the USA is about to declare that the interop barricades no longer exist, therefore the monopolist should be free to wreak havoc.

    The stage looks set for a very dramatic final act.


Microsoft Watch Finally Gets it - It's the Business Applications!- Obla De OBA Da - 0 views

  • To be fair, Microsoft seeks to solve real world problems with respect to helping customers glean more value from their information. But the approach depends on enterprises adopting an end-to-end Microsoft stack—vertically from desktop to server and horizontally across desktop and server products. The development glue is .NET Framework, while the informational glue is OOXML.
    • Gary Edwards
       
      OOXML is the transport - a portable XML document model where the "document" is the interface into content/data and media streaming.

      The binding model for OOXML is "Smart Documents", and it is proprietary!

      Smart Documents is how data, streaming media, scripting-routing-workflow intelligence, and metadata are added to any document object.

      Think of the ODF binding model using XForms, XML/RDF and RDFa metadata. One could even use Jabber XMPP as a binding model, which is how we did the Comcast SOA based Sales and Inventory Management System prototype.

      Interestingly, Smart Documents is based on pre-written widgets that can simply be dragged, dropped and bound to any document object. The InfoPath application provides a highly visual means for end users to build intelligent self-routing forms. But Visual Studio .NET, which was released with MSOffice 2007 in December of 2006, makes it very easy for application and line of business integration developers to implement very advanced data binding using the Smart Document widgets.

      I would also go as far as to say that what separates MSOOXML from Ecma 376 is going to be primarily Smart Documents.

       Yes, there are .NET Framework Libraries and Vista Stack dependencies like XAML that will also provide a proprietary "Vista Stack"-only barrier to interoperability, but Smart Documents is a killer.

      One company that will be particularly hurt by Smart Documents is Google. The reason is that the business value of Google Search is based on using advanced and closely held proprietary algorithms to provide metadata structure for unstructured documents.

      This was great for a world awash in unstructured documents. By moving the "XML" structuring of documents down to the author - workgroup - workflow application level though, the world will soon enough be awash in highly structured documents that have end user metadata defining document objects and
  • Microsoft seeks to create sales pull along the vertical stack between the desktop and server.
    • Gary Edwards
       
      The vertical stack is actually desktop - server - device - web based.  The idea of a portable XML document is that it must be able to transition across the converged application space of this sweeping stack model.

      Note that ODF is intentionally limited to the desktop by its OASIS Charter statement.  One of the primary failings of ODF is that it is not able to be fully implemented in this converged space.  OOXML, on the other hand, was created exactly for this purpose!

      So ODF is limited to the desktop, and remains tightly bound to OpenOffice feature sets.  OOXML differs in that it is tightly bound to the Vista Stack.

      So where is an Open Stack model to turn to?

      Good question, and one that will come to haunt us for years to come.  Because ODF cannot move into the converged space of desktop to server to device to the web information systems connected through a portable document/data transport, it is unfit as a candidate for Universal File Format.

      OOXML is unfit as a UFF because it is application-, platform- and vendor-bound.

      For those of us who believe in an open and unencumbered universal file format, it's back to the drawing board.

      XHTML+ (XHTML + CSS3 + RDF) is looking very good.  The challenge is proving that we can build plugins for MSOffice and OpenOffice that can fully implement XHTML+.  Can we convert the billions of binary legacy documents and existing MSOffice bound business processes to XHTML+?

      I think so.  But we can't be sure until the da Vinci proves this conclusively.

      One thing to keep in mind though.  The internal plugins have already shown that it is possible to do multiple file formats.  OOXML, ODF, and XML encoded RTF all have been shown to work, and do so with a level of two way conversion fidelity demanded by existing business processes.

      So why not try it with XHTML+, or ODEF (the eXtended version of ODF en
  • Microsoft's major XML-based format development priority was backward compatibility with its proprietary Office binary file formats.
    • Gary Edwards
       
      This backwards compatibility with the existing binary file formats isn't the big deal Microsoft makes it out to be.  ODF 1.0 includes a "Conformance Clause" (Section 1.5) that was designed and included in the specification exactly so that the billions of binary legacy documents could be converted into ODF XML.

      The problem with the ODF Conformance Clause is that the leading ODF application, OpenOffice,  does not fully support and implement the Conformance Clause. 

      The only foreign elements supported by OpenOffice are paragraphs and text spans.  Critically important structural document characteristics such as lists, fields, tables, sections and page breaks are not supported!

      This leads to a serious drop in conversion fidelity wherever MS binaries are converted to OpenOffice ODF.

      Note that OpenOffice ODF is very different from MSOffice ODF, as implemented by internal conversion plugins like da Vinci.  KOffice ODF and Google Docs ODF are all different ODF implementations.  Because there are so many different ways to implement ODF, and still have "conforming" ODF documents, there is much truth to the statement that ODF has zero interoperability.

      It's also true that OOXML has optional implementation areas.  With ODF we call these "optional" implementation areas "interoperability break points" because this is exactly where the document exchange presentation fidelity breaks down, leaving the dominant market ODF application as the only means of sustaining interoperability.

      With OOXML, the entire Vista Stack - Win32 dependency layer is "optional".  No doubt, all MSOffice - Exchange/SharePoint Hub applications will implement the full sweep of proprietary dependencies.  This includes the legacy Win32 API dependencies (like VML, EMF, EMF+), and the emerging Vista Stack dependencies that include Smart Documents, XAML, .NET 3.0 Libraries, and DrawingML.

      MSOffice 2007 i
  • ...6 more annotations...
  • Microsoft's backwards compatibility priority means the company made XML-based format decisions that compromise the open objectives of XML. Open Office XML is neither open nor XML.
    • Gary Edwards
       
      True, but a tricky statement given that the proprietary OOXML implementation is "optional".  It is theoretically possible to implement Ecma 376 without the proprietary dependencies of MSOffice - Exchange/SharePoint Hub - Vista Stack "OOXML".

      In fact, this was first demonstrated by the legendary document processing - plugin architecture expert, Florian Reuter.

      Florian has the unique distinction of being the primary architect for two major plugins: the da Vinci ODF plugin for MSOffice, and, the Novell OOXML Translator plugin for OpenOffice!

      It is the Novell OOXML Translator Plugin for OpenOffice that first demonstrated that Ecma 376 could be cleanly implemented without the MSOffice application-platform-vendor specific dependencies we find in every MSOffice OOXML document.

      So while Joe is technically correct here, that OOXML is neither open nor XML, there is a caveat.  For 95% of all desktops and near 100% of all desktops in a workgroup, Joe's statement holds true.  For all practical concerns, that's enough.  For Microsoft's vaunted marketing spin machine though, they will make it sound as though OOXML is actually open and application-platform-vendor independent.


  • Microsoft got there first to protect Office.
    • Gary Edwards
       
      No. I disagree. Microsoft needs to move to XML structured documents regardless of what others are doing. The binary document model is simply of no use for any desktop-to-server-to-device-to-web transport!

      Many wonder what Microsoft's SOA strategy is. Well, it's this: the Vista Stack based on OOXML-Smart Documents-.NET.

      The thing is, Microsoft could not afford to market a SOA solution until all the proprietary solutions of the Vista Stack were in place.

      The Vista Stack looks like this:

      ..... The core :: MSOffice <> OOXML <> IE <> The Exchange/SharePoint Hub

      ..... The services :: E/S HUb <> MS SQL Server <> MS Dynamics <> MS Live <> MS Active Directory Server <> MSOffice RC Front End

      The key to the stack is the OOXML-Smart Documents capture of EXISTING MSOffice bound business processes and documents.

      The trick for Microsoft is to migrate these existing business processes and documents to the E/S Hub where line of business developers can re engineer aging desktop LOB apps.

      The productivity gains that can be had through this migration to the E/S Hub are extraordinary.

      A little over a year ago an E/S Hub vertical market application called "Agent Achieve" came out for the real estate industry. AA competed against a legacy of twenty years of contact management based - MLS data connected desktop shrinkware applications. (MLS: Multiple Listing Service)

      These traditional desktop client/server productivity apps defined the real estate business process as far as it could be said to be "digital".  For the most part, the real estate transaction industry remains a paper driven process. The desktop stuff was only useful for managing clients and lead prospecting. No one could crack the electronic documents - electronic business transaction model.  This will no doubt change with the emer
  • Microsoft can offer businesses many of the informational sharing and mining benefits associated with the markup language while leveraging Office and supporting desktop and server products as the primary consumption conduit.
    • Gary Edwards
       
      Okay, now Joe has the Microsoft SOA bull by the horns.  Why doesn't he wrestle the monster down?
  • By adapting XML
    • Gary Edwards
       
      The requirements of these E/S Hub systems are XP, MSOffice 2003 Professional, Exchange Server with OWA (Outlook Web Access), SharePoint Server, Active Directory Server, and at least four MS SQL Servers!

      In April of 2006, Microsoft issued a harsh and sudden End-of-Life for all Windows 2000 - MSOffice 2000 systems in the real estate industry (although many industries were similarly impacted). What happened is that on a Friday afternoon, just prior to a big open house weekend, Microsoft issued a security patch for all Exchange systems. Once the patch was installed, end users needed IE 7.0 to connect to the Exchange Server systems.

      Since there is no IE 7.0 made for Windows 2000, those users relying on E/S Hub applications, which was the entire industry, suddenly found themselves disconnected and near out of business.

      Amazingly, not a single user complained! Rather than getting pissed at Microsoft for the sudden and very disruptive EOL, the real estate users simply ran out to buy new XP-MSOffice 2003 systems. It was all done under the rationale that to be competitive, you have to keep up with technology systems.

      Amazing. But it also goes to show how powerfully productive the E/S Hub applications can be. This wouldn't have happened if the E/S Hub applications didn't have a very high productivity value.

      When we visited Massachusetts in June of 2006, to demonstrate and test the da Vinci ODF plugin for MSOffice, we found them purchasing E/S Hubs en masse! These are ODF killers! Yet Microsoft sales people had convinced Massachusetts ITD that Exchange/SharePoint was a simple-to-use email-calendar-portal system. Not a threat to anyone!

      The truth is that in the E/S Hub ecosystem, OOXML is THE TRANSPORT. ODF is a poor, second class attachment of no use at the application - document processing chain level.

      Even if Massachusetts had mandated ODF, they were only one E/S Hub Court Doc
  • Microsoft will vie for the whole business software stack, a strategy that I believe will be indisputable by early 2009 at the latest.
    • Gary Edwards
       
      Finally, someone who understands the grand strategy of leveraging the desktop monopoly into the converged space of server, device and web information systems.

      What Joe isn't watching is the way the Exchange/SharePoint Server connects to MS SQL Server, Active Directory Server, MS LIve and MS Dynamics.

      Also, Joe does not see the connection between OOXML as the portable XML document/data transport, and the insidiously proprietary Smart Documents metadata - data binding system that totally separates MSOOXML from Ecma 376 OOXML!
  • I'm convinced that Office as a platform is an eventual dead end. But Microsoft is going to lead lots of customers and partners down that platform path.
    • Gary Edwards
       
      Yes, but the new platform for business process development is that of MSOffice <> Exchange/SharePoint Hub.

      The OOXML-Smart Docs transport replaces the old binary document, with its OLE and VBA scripts-and-macros functionality, which, for the sake of brevity, we can call the legacy Win32 API dependencies.

      One substantial difference is that OOXML-Smart Docs is Vista Stack ready, while the Win32 API dependencies were desktop bound.

      Another way of looking at this is to see that the old MSOffice platform was great for desktop application integration.  As long as the complete Win32 API was available (Windows + MSOffice + VBA run-times), this platform was great for workgroups.  The Line-of-Business integrated apps were among the most brittle of all client/server efforts, but they were the best of that generation.

      The Internet offers everyone a new way of integrating data, content, and streaming media.  Web applications are capable of loosely coupled serving and consuming of other application services.  Back-end systems can serve up data in a number of ways: web services as SOAP, web services as AJAX/REST, or raw XML data streams, as in the XMLHttpRequest or Jabber P2P models.

      On the web services consumption side, it looks like AJAX/REST will be the blockbuster choice, if the governance and security issues can be managed.
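      A minimal sketch of that consumption pattern, for concreteness. This is browser-side TypeScript; the endpoint URL and the response shape are hypothetical, invented purely for illustration. The point is only that the client binds to a small HTTP/JSON contract rather than to whatever stack produces it.

```typescript
// Minimal AJAX/REST consumption sketch (browser-side TypeScript).
// The endpoint and the Listing shape are hypothetical examples.
interface Listing {
  id: number;
  address: string;
}

function fetchListing(id: number, onDone: (listing: Listing) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", `https://example.com/api/listings/${id}`); // hypothetical endpoint
  xhr.setRequestHeader("Accept", "application/json");
  xhr.onreadystatechange = () => {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      // Loose coupling: the client depends only on the JSON contract,
      // not on how the back end (SOAP bridge, database, etc.) produced it.
      onDone(JSON.parse(xhr.responseText) as Listing);
    }
  };
  xhr.send();
}

// Usage: fetchListing(42, l => console.log(l.address));
```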

      Into this SOA mash Microsoft will push with a sweeping integrated-stack model.  Since the Smart Docs part of the OOXML-Smart Docs transport equation is totally proprietary, but used throughout the Vista Stack, it will provide Microsoft with an effective customer lock-in / OSS lock-out point.


Gray Matter : Compatibility Pack for Open XML passes 100 million downloads - 0 views

  • The Compatibility Pack, software that allows you to open, edit, and save Open XML format documents in Office XP and 2003, has now been downloaded over 100 million times.
  •  
    Also includes stats in table form indicating that, according to Google search results, OOXML documents now outnumber ODF documents on the Web for word processor, spreadsheet, and presentation documents.

Oracle's Ellison gambles with OpenOffice's future * The Register - 0 views

  • "We encourage the OpenOffice group to quickly build their version of a spread sheet or a word app using JavaFX," Ellison said.
    • Alex Brown
       
      errr, what? And is OO.o's future now under Larry's direct command?!

Microsoft planned to bury XML developer, says federal judge | The Industry Standard - 0 views

  •  
    Maybe the most informative article to date regarding the Microsoft-i4i "custom XML" patent infringement case.  Gregg Keizer is trying to dig into the trial records and judicial response. Looks like for Microsoft, it's business as usual. Excerpt: "Microsoft knew of the patent held by i4i as early as 2001, but instead set out to make the Canadian developer's software 'obsolete' by adding a feature to Word, according to court documents."
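    For readers unsure what "custom XML" means here: WordprocessingML lets user-defined elements wrap ordinary document content, which is the feature at issue in the case. A rough, hand-written sketch follows; the fragment and the invoiceNumber tag are invented for illustration and are not taken from the trial record.

```typescript
// Rough sketch of inline "custom XML" in WordprocessingML: a user-defined
// element name ("invoiceNumber", hypothetical) is attached to document
// content via w:customXml. Runs in a browser (DOMParser).
const W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main";

const fragment = `
<w:p xmlns:w="${W_NS}">
  <w:customXml w:element="invoiceNumber">
    <w:r><w:t>INV-0042</w:t></w:r>
  </w:customXml>
</w:p>`;

const doc = new DOMParser().parseFromString(fragment, "application/xml");
for (const el of Array.from(doc.getElementsByTagNameNS(W_NS, "customXml"))) {
  // Read back the user-defined tag name and the text it wraps.
  console.log(el.getAttributeNS(W_NS, "element"), "=", el.textContent?.trim());
}
```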

Federal future cloudy for Microsoft Word -- Government Computer News - 0 views

  • “We have explained [to federal agencies] ways of moving from Microsoft Word to an i4i implementation of custom XML,” said Michel Vulpe, i4i’s founder. “If agencies want custom XML, we are prepared, and we are working on a way for them to use our technology.” The company hasn’t been actively marketing that approach to government so far because it didn’t want to take advantage of the current “unfortunate situation,” he said. But with agencies likely to be asking the question, he said i4i will probably have to take a more proactive stance in the future.

Classes of Fidelity for Document Applications - Rick Jelliffe - 0 views

  •  
    Rick Jelliffe weighs in on the OpenOffice ODF - MSOffice OpenXML interop imbroglio. His take is to focus on Classes of Fidelity, providing us with a comparative table of fidelity categories. I wonder though if this über document-processing approach is anywhere near consistent with the common-sense meaning of interoperability to average end-users? IMHO, end-users interpret "interoperability" to mean that compliant applications can exchange documents without loss of information.

    "..... In my blog last year, Is ODF the new RTF or the new .DOC? Can it be both? Do we need either?, I raised the question of whether ODF would replace RTF or DOC. I think this issue has come back with a bang with the release of Office 2007 SP2, and I'd like to give another pointer to it for readers who missed it first time around...."

    "...... OASIS ODF TC has some kind of conformance and testing wing at work, but it is not at all clear that they will deliver anything in this kind of area. Without targeting these classes, ODF's breezy conformance requirements mean that ODF-conformant software can deliver vastly different kinds of fidelity, yet still accord with the letter of the law (and, indeed, with the spirit of the ODF spec, which allows so many holes), which will cause frustration all around....."

    Ouch!

OOXML is defective by design: Microsoft's latest aggression on ODF, codenamed "cast lead" - 0 views

  • nazis
    • Alex Brown
       
      "Nazis", "genocide", "white phosphorous" -- and all about a file format implementation ...

Doug Mahugh : Tracked Changes - 0 views

  • Much was made during the IS29500 standards process of the difference in the size of the ODF and Open XML specifications. This is a good example of where that difference comes from: in this case, a concept glossed over in three vague sentences of the ODF spec gets 17 pages of documentation in the Open XML spec.
    • Alex Brown
       
      This is the nub; OOXML may be overweight, but ODF is severely undernourished as a spec. (For a concrete taste of what those 17 pages describe, see the markup sketch after this thread.)
  •  
    Alex, I know from your previous writings that you do not regard OOXML as completely specified. But your post might be so misinterpreted. In my view, neither ODF nor OOXML has yet reached the threshold of eligibility as an international standard, completely specifying "clearly and unambiguously the conformity requirements that are essential to achieve the interoperability." ISO/IEC JTC 1 Directives, Annex I. OOXML is ahead of ODF in some aspects of specificity, but the eligibility finish line remains beyond the horizon for both.
  •  
    Paul, that's right - though so far the faulty things in OOXML turn out to be more around the edges, as opposed to ODF's central lapses. Still, it's early days in the examination of OOXML, so I'm reserving any firm call on the comparative merits of the specs until I have read a lot (a lot) more. Is there an area of OOXML you'd say was particularly underbaked? I'm quite interested in the fact that neither of these beasts specifies scripting languages ...
  •  
    Hi, Alex,

    Most seriously, there are no profiles and accompanying requirements to enable less featureful apps to round-trip documents with more featureful apps, a la the W3C Compound Document by Reference Framework. That's an enormous barrier to market entry and interoperability. That defect reacts synergistically with the dearth of semantic conformity requirements, with the incredible number of options including those 500+ identified extension points, and with a compatibility framework for extensions that, while a good start, leaves implementers far too much discretion in assigning and processing compatibility attributes. There are also major harmonization issues with other standards that get in the way of transformations, where Microsoft originally rolled its own rather than embracing existing open standards.

    I think it not insignificant that OOXML as a whole is available only under a RAND-Z pledge rather than being available for the entire world. The patent claims need to be identified and worked around, or a different rights scheme needs Microsoft management's promulgation. This is a legal interoperability issue as opposed to a technical one, but an interoperability barrier nonetheless, an "unnecessary obstacle to international trade" in the sense of the Agreement on Technical Barriers to Trade. And absent a change by Microsoft in its rights regime, the work-arounds are technical.

    This is not to suggest that ODF lacks problems in regard to the way it implements standards incorporated by reference. The creation of unique OASIS namespaces, rather than doing the needed harmonizing work with the relevant W3C WGs, is a large ODF tumor in need of removal and reconstructive surgery. I'm not sure what is happening with the W3C consultation in that regard.

    I worked a good part of the time over several months comparing ODF and Ecma 376, evaluating their comparative suitability as document exchange formats. I gave up when it climbed well past 100 pages in length because the de
  •  
    1. Full-featured editors available that are capable of not generating application-specific extensions to the formats?
    2. Interoperability of conforming implementations mandatory?
    3. Interoperability between different IT systems either demonstrable or demonstrated?
    4. Profiles developed and required for interoperability?
    5. Methodology specified for interoperability between less and more featureful applications?
    6. Specifies conformity requirements essential to achieve interoperability?
    7. Interoperability conformity assessment procedures formally established and validated?
    8. Document validation procedures validated?
    9. Specifies an interoperability framework?
    10. Application-specific extensions classified as non-conformant?
    11. Preservation of metadata necessary to achieve interoperability mandatory?
    12. XML namespaces for incorporated standards properly implemented? (ODF-only failure, because Microsoft didn't incorporate any relevant standards.)
    13. Optional-feature interop breakpoints eliminated?
    14. Scripting language fully specified for embedded scripts?
    15. Hooks fully specified for use by embedded scripts?
    16. Standard is vendor- and application-neutral?
    17. Market requirement -- capable of converging desktop, server, Web, and mobile device editors and viewers? (OOXML better equipped here, but its patent barrier blocks.)
  •  
    Didn't notice that my post before last was chopped at the end until after I had posted the list. Then Diigo stopped responding for a few minutes. Anyway, the list is a short summation of my research on the comparative suitability of ODF 1.1 and Ecma 376 as document exchange formats, winnowed to the defects they have in common except as noted. The research was never completed because, in the political climate of the time, the world wasn't ready to act on the defects. The criteria applied were as objective as I could make them; they were derived from competition law, JTC 1 Directives, and market requirements. I think the list is as good today in regard to IS 29500 as it was then to Ecma 376, although I have not taken an equally deep dive into 29500. You might find the list useful, albeit there is more than a bit of redundancy in it.
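    To give the "three vague sentences vs. 17 pages" point above some texture, here is a hand-written sketch of what a tracked insertion looks like in WordprocessingML, where <w:ins> wraps inserted runs and carries author/date attributes. The fragment and its attribute values are invented for illustration; the pages in IS 29500 cover the deletions, moves, nesting, and property-change cases that a toy fragment like this glosses over.

```typescript
// Minimal sketch: extract tracked insertions from a WordprocessingML fragment.
// Runs in a browser (DOMParser); the fragment itself is hypothetical.
const WML_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main";

const para = `
<w:p xmlns:w="${WML_NS}">
  <w:r><w:t>Original text. </w:t></w:r>
  <w:ins w:id="1" w:author="Reviewer" w:date="2009-06-01T00:00:00Z">
    <w:r><w:t>Inserted text.</w:t></w:r>
  </w:ins>
</w:p>`;

const tracked = new DOMParser().parseFromString(para, "application/xml");
for (const ins of Array.from(tracked.getElementsByTagNameNS(WML_NS, "ins"))) {
  const author = ins.getAttributeNS(WML_NS, "author");
  const date = ins.getAttributeNS(WML_NS, "date");
  console.log(`inserted by ${author} on ${date}: "${ins.textContent?.trim()}"`);
}
```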

Office Web Apps : Silverlight Web Platform Lock-in for MSOffice documents - 0 views

  •  
    How Does Word Web App Get Better With Silverlight?
    - Faster load performance, since typically fewer bytes need to be downloaded before showing the document.
    - Improved text fidelity at 100% zoom, including better text spacing and rendering.
    - Greatly improved text fidelity at zoom levels other than 100%.
    - Text will respect settings made in the ClearType Tuner, so you're able to determine how much (if any) ClearType you'd like to see. The ClearType Tuner is available on the web for older versions of Windows, and is included in Windows 7.
    - Improved accuracy of hit highlighting in Find.

Next round of ODF vs OOXML… « CyberTech Rambler - 0 views

  • approval of a standard that wasn't ready
  • no one at ISO listened
  • The whole OOXML thing is a collection of mistakes
  • in the time frame taken to approve it
  • by National Bodies to trust that the BRM has influence
  • by the BRM for not attending to every concern of national bodies
  • for not incorporating BRM resolutions in the published standard
  • OOXML is fundamentally intended to document a format for a pre-existing technology and feature set of recent proprietary systems.
  • years for IS29500 to have a really good debugged version
  • years for ODF to have a good, complete debugged version
  • the nature of big standards
  • sad about OOXML meeting
  • Apple, Oracle and the British Library did not even bother to turn up
  •  
    Found myself blocked from commenting on that blog entry for some reason. Here's the comment I tried to post.

    @ctrambler "Between vendor-heavy or user-heavy, I choose vendor-heavy. It is after all, a office document format designed for office application. Linking with other systems is important, but it is not the ultimate aim."

    That statement bespeaks a lack of familiarity with what an IT standard *IS.* But it is a lack of familiarity shared by all too many who work on IT standards. Standards are about uniformity, not variability.

    An international standard must by law specify [i] all characteristics [ii] of an identifiable product or group of products [iii] only in mandatory "must" or "must not" terms. WTDS 135 EC - Asbestos (World Trade Organization Appellate Body; 12 March 2001; HTML version), para. 66-70, http://www.wto.org/english/tratop_e/dispu_e/cases_e/ds135_e.htm And IT standards in particular must "clearly and unambiguously specify all conformity requirements that are essential to achieve the interoperability." ISO/IEC JTC 1 Directives (5th Ed., v. 3.0, 5 April 2007), pg. 145, http://www.jtc1sc34.org/repository/0856rev.pdf Absent such specifications, a standard is a standard in name only.

    A standard is intended to establish a market in standardized goods, creating economic efficiency and competition. This is perhaps most simply illustrated with weights and measures, where a pound of flour must weigh the same regardless of which vendor sells the product. But we can also see it in the interoperability context, e.g., with standardized nuts, bolts, and wrenches. Absent sufficient specificity to enable and require interoperability, ODF and OOXML create technical barriers to trade rather than promoting competition. And the Agreement on Technical Barriers to Trade unambiguously requires that national standardization bodies "shall ensure that technical regulations [includes international standards] are not prepared, adopted or applied with a v
  •  
    (continuation) ... And the Agreement on Technical Barriers to Trade unambiguously requires that national standardization bodies "shall ensure that technical regulations [includes international standards] are not prepared, adopted or applied with a view to or with the effect of creating unnecessary obstacles to international trade." http://www.wto.org/english/docs_e/legal_e/17-tbt_e.htm#articleII

    So while I agree that linking IT systems may not invariably be the ultimate goal, sufficient specificity in an IT standard to do so is in fact a threshold user and legal requirement. Otherwise, one has vendor lock-in, and definition of the standard is controlled by the vendor with the largest market share, not by the standard itself.

    Neither ODF nor OOXML met that threshold for eligibility as international standards, and they still do not. In both cases, national standardization bodies voted to adopt the standards without paying heed to fundamental legal and user requirements.