
Document Wars: group items matching "ooxml" in title, tags, annotations or URL

Gary Edwards

IBM In Denial Over Lotus Notes - Forbes.com - 0 views

  • The marketing folks in IBM's Lotus division are starting to sound like the Black Knight in Monty Python and the Holy Grail, who insists he's winning a fight even as he loses both arms and legs: "'Tis but a scratch," the Black Knight declares after one arm is lopped off. "Just a flesh wound," he says after losing the other. "I'm invincible!" The same goes for IBM's (NYSE: IBM) Lotus, which keeps declaring victory even as Microsoft (Nasdaq: MSFT) carves it up.
  •  
    Want to know the real reason why IBM and Microsoft are going at it hammer and tongs over document formats? Here it is. Lotus Notes is getting clobbered by the Exchange/SharePoint juggernaut.

    The article is old, but the point is well taken. Today the Exchange/SharePoint juggernaut holds over 65% market share. IBM is struggling to protect the Lotus Stack against an impossible foe.

    The thing is, Microsoft E/S will ALWAYS have better integration with the MSOffice-Outlook desktop monopoly base (550 million and counting). Most of this "integration" is due to the high-fidelity exchange of documents in Microsoft's proprietary XML format known as MS-OOXML. Forget the charade that MS-OOXML is an open standard called Ecma 376. MSOffice and the infamous XML Compatibility Pack plug-in do not implement Ecma 376; the Pack implements MS-OOXML.

    One key difference between MS-OOXML and Ecma 376 is that MS-OOXML is infused with the Smart Tags components. These are for metadata, data binding, data extraction, workflow, intelligent routing and on-demand repurposing of document components. In effect, MS-OOXML :: Smart Tags combines with proprietary .NET libraries, XAML and, soon enough, Silverlight to replace the entire span of W3C Open Internet Technologies.

    Can you say "HTML"?

    Okay, so why does this matter to IBM and the future of Lotus Notes?

    The end game of the document format wars is that of a stack model that converges desktop, server, devices and web information systems. The MS Stack uses MS-OOXML as the primary transport of accelerated content/data/multimedia streams running across the MS Stack of desktop, server, device and web application systems. It's the one point of extreme interoperability.

    It's also a barrier that no non-MS application or service can penetrate or interoperate with except on terms Microsoft dictates.
Gary Edwards

Linux Foundation Legal : Behind Putting the OpenDocument Foundation to Bed (without its supper) : Updegrove - 0 views

  • CDF is one of the very many useful projects that W3C has been laboring on, but not one that you would have been likely to have heard much about. Until recently, that is, when Gary Edwards, Sam Hiser and Marbux, the management (and perhaps sole remaining members) of the OpenDocument Foundation decided that CDF was the answer to all of the problems that ODF was designed to address. This announcement gave rise to a flurry of press attention that Sam Hiser has collected here. As others (such as Rob Weir) have already documented, these articles gave the OpenDocument Foundation’s position far more attention than it deserved. The most astonishing piece was written by ZDNet’s Mary Jo Foley. Early on in her article she stated that, “the ODF camp might unravel before Microsoft’s rival Office Open XML (OOXML) comes up for final international standardization vote early next year.” All because Gary, Sam and Marbux have decided that ODF does not meet their needs. Astonishing indeed, given that there is no available evidence to support such a prediction.
  •  
    Uh?  The ODF failure in Massachusetts doesn't count as evidence that ODF was not designed to be compatible with existing MS documents or interoperable with existing MSOffice applications?

    And it's not just the da Vinci plug-in that failed to implement ODF in Massachusetts!  Nine months later Sun delivered their ODF plug-in for MSOffice to Massachusetts.  The next day, Massachusetts threw in the towel, officially recognizing MS-OOXML (and the MS-OOXML Compatibility Pack plug-in) as a standard format for the future.

    Worse, the Massachusetts recognition of MS-OOXML came just weeks before the September 2nd ISO vote on MS-OOXML. Why not wait a few more weeks? After all, Massachusetts had conducted a year-long pilot study to implement ODF using ODF desktop office suite alternatives to MSOffice. Not only did the rip-out-and-replace approach fail, but they were also unable to integrate OpenOffice ODF desktops into existing MSOffice-bound workgroups.

    The year-long pilot study was followed by another year-long effort trying to implement ODF using the plug-in approach. That too failed, with Sun's ODF plug-in as the final candidate, proving the difficulty of implementing ODF in situations where MSOffice workgroups dominate.

    California and the EU-IDABC were closely watching the events in Massachusetts, as was nearly every CIO in government and private enterprise. Reasoning that if Massachusetts was unable to implement ODF, they were unlikely to fare any better, California's CIOs flatly refused IBM and Sun's efforts to get a pilot study underway.

    Across the pond, in the aftermath of Massachusetts CIO Louis Gutierrez's resignation on October 4th, 2006, the EU-IDABC set about developing their own file format, ODEF. The Open Document Exchange Format splashed into the public discussion on February 28th, 2007 at the "Open Document Exchange Workshop" held in Berlin, Germany.

    Meanwhile, the Sun ODF plug-in is fl
Gary Edwards

Notes on Breaking the Web to Ride the Fifth Wave - 1 views

  • garyedwards's Discussions Breaking the Web Talkback: Google: OOXML 'insufficient and unnecessary'
  •  
    Somehow I got involved in this discussion and ended up posting a number of comments explaining the how and why behind Microsoft's push for ISO approval of MS-OOXML. I have been working on a paper titled "Breaking the Web to Ride the Great Wave". Breaking the Web is what will happen once ISO approves MS-OOXML.

    The Microsoft Stack of Web Servers (Exchange, SharePoint, MS-SQL Server) is integrated into the MSOffice-Outlook desktop, and the MS desktop dominates much of the document workflows and business processes of the commercial world. ISO approval of the MSOffice-specific MS-OOXML will legitimize MSOffice as an editor of standardized, web-ready documents. But how MS-OOXML documents become "Web Ready" is tricky. In the December 2007 MSOffice SDK beta, we see how this is done. The SDK provides a conversion component for the quick, high-fidelity conversion of MS-OOXML documents to XAML. XAML is a proprietary part of the WPF (Windows Presentation Foundation) layer of the .NET framework, and is easily paired with Silverlight. Sometimes XAML is referred to as "fixed/flow". XAML is an MS-proprietary replacement for the W3C's (X)HTML. Billions of MSOffice documents will make their way to the Web using this SDK converter. The path for transitioning the monopolist hold on desktop business processes to the monopolist stack of web servers is set with this converter.

    ISO approval of MS-OOXML will enable Microsoft to dodge bringing their desktop editor into compliance with advancing W3C standards such as (X)HTML, CSS 3, XForms, SVG and RDF. Instead of these open standards, transitioning business processes will be locked into MS-only dependencies: XAML, Silverlight, WinForms, and Smart Tags. The breaking of the Web results in a consumer/business cloud dependent on MS proprietary technologies that are out of the reach of Firefox, Apache, Java, and Adobe technologies. Google won't be able to penetrate the business stack, and will be kept very busy trying to defend
Alex Brown

Doug Mahugh : Tracked Changes - 0 views

  • Much was made during the IS29500 standards process of the difference in the size of the ODF and Open XML specifications.  This is a good example of where that difference comes from: in this case, a concept glossed over in three vague sentences of the ODF spec gets 17 pages of documentation in the Open XML spec.
    • Alex Brown
       
      This is the nub; OOXML may be overweight, but ODF is severely undernourished as a spec.
  •  
    Alex, I know from your previous writings that you do not regard OOXML as completely specified. But your post might be so misinterpreted. In my view, neither ODF nor OOXML has yet reached the threshold of eligibility as an international standard, completely specifying "clearly and unambiguously the conformity requirements that are essential to achieve the interoperability." ISO/IEC JTC 1 Directives, Annex I. OOXML is ahead of ODF in some aspects of specificity, but the eligibility finish line remains beyond the horizon for both.
  • ...2 more comments...
  •  
    Paul, that's right - though so far the faulty things in OOXML turn out to be more round the edges as opposed to ODF's central lapses. Still, it's early days in the examination of OOXML so I'm reserving making any firm call on the comparative merits of the specs until I have read a lot (a lot) more. Is there an area of OOXML you'd say was particularly underbaked? I'm quite interested in the fact that neither of these beasts specify scripting languages ...
  •  
    Hi, Alex, Most seriously, there are no profiles and accompanying requirements to enable less featureful apps to round trip documents with more featureful apps, a la W3C Compound Document by Reference Framework. That's an enormous barrier to market entry and interoperability. That defect reacts synergistically with the dearth of semantic conformity requirements, with the incredible number of options including those 500+ identified extension points, and with a compatibility framework for extensions that while a good start leaves implementers far too much discretion in assigning and processing compatibility attributes.

    There are also major harmonization issues with other standards that get in the way of transformations, where Microsoft originally rolled its own rather than embracing existing open standards.

    I think it not insignificant that OOXML as a whole is available only under a RAND-Z pledge rather than being available for the entire world. The patent claims need to be identified and worked around or a different rights scheme needs Microsoft management's promulgation. This is a legal interoperability issue as opposed to technical, but an interoperability barrier nonetheless, an "unnecessary obstacle to international trade" in the sense of the Agreement on Technical Barriers to Trade. And absent a change by Microsoft in its rights regime, the work-arounds are technical.

    This is not to suggest that ODF lacks problems in regard to the way it implements standards incorporated by reference. The creation of unique OASIS namespaces rather than doing the needed harmonizing work with the relevant W3C WGs is a large ODF tumor in need of removal and reconstructive surgery. I'm not sure what is happening with the W3C consultation in that regard.

    I worked a good part of the time over several months comparing ODF and Ecma 376, evaluating their comparative suitability as document exchange formats. I gave up when it climbed well past 100 pages in length because the de
  •  
    1. Full-featured editors available that are capable of not generating application-specific extensions to the formats?
    2. Interoperability of conforming implementations mandatory?
    3. Interoperability between different IT systems either demonstrable or demonstrated?
    4. Profiles developed and required for interoperability?
    5. Methodology specified for interoperability between less and more featureful applications?
    6. Specifies conformity requirements essential to achieve interoperability?
    7. Interoperability conformity assessment procedures formally established and validated?
    8. Document validation procedures validated?
    9. Specifies an interoperability framework?
    10. Application-specific extensions classified as non-conformant?
    11. Preservation of metadata necessary to achieve interoperability mandatory?
    12. XML namespaces for incorporated standards properly implemented? (ODF-only failure because Microsoft didn't incorporate any relevant standards.)
    13. Optional feature interop breakpoints eliminated?
    14. Scripting language fully specified for embedded scripts?
    15. Hooks fully specified for use by embedded scripts?
    16. Standard is vendor- and application-neutral?
    17. Market requirement -- Capable of converging desktop, server, Web, and mobile device editors and viewers? (OOXML better equipped here, but its patent barrier blocks.)
  •  
    Didn't notice that my post before last was chopped at the end until after I had posted the list. Then Diigo stopped responding for a few minutes. Anyway, the list is a short summation of my research on the comparative suitabilities of ODF 1.1 and Ecma 376 as document exchange formats, winnowed to the defects they have in common except as noted. The research was never completed because in the political climate of the time, the world wasn't ready to act on the defects. The criteria applied were as objective as I could make them; they were derived from competition law, JTC 1 Directives, and market requirements. I think the list is as good today in regard to IS 29500 as it was then to Ecma 376, although I have not taken an equally deep dive into 29500. You might find the list useful, albeit there is more than a bit of redundancy in it.
Gary Edwards

Open XML blogging in 2007 - Doug Mahugh - Site Home - MSDN Blogs - 0 views

  •  
    At the height of the Document Wars, Doug Mahugh posted this year-end, month-to-month, blow-by-blow list of blog assaults. I stumbled upon Doug's collection following up on a recent (December 20th, 2010) email comment from Karl. Karl had been reading the infamous "Hypocrisy 101" blog written by Jesper Lund Stocholm: http://bit.ly/hgCVLV

    Recently I was researching cloud-computing, following the USA Federal Government dictate that cloud-computing initiatives should get top-priority first consideration for all government agency purchases. The market is worth about $8 Billion, with Microsoft BPOS and Google Apps totally dominating contract decisions in the early going. The loser looks to be IBM Lotus Notes, since they seem to have held most of the systems contracts.

    So what does this have to do with Hypocrisy 101? To stop Microsoft BPOS, IBM had to get a government mandate for ODF and NOT OOXML. The reason is now clear. Microsoft BPOS is dominating the early rounds of government cloud-computing contracts because BPOS is "compatible" with the legacy MSOffice desktop productivity environment. Lotus Symphony is not. Nor is OpenOffice or any other ODF office suite. This compatibility between BPOS and legacy MSOffice productivity environments means less disruption and lower business-process re-engineering costs as governments make the generational shift from desktop "client/server" productivity to a Web productivity platform - otherwise known as "cloud-computing".

    IMHO, neither ODF nor OOXML was designed for this cloud-computing :: Web productivity platform future. The "Web" aspect of cloud-computing means that HTML-HTTP-JavaScript technologies will prevail in this new world of cloud-computing. It's difficult, but not impossible, to convert ODF and OOXML to HTML+ (HTML5, CSS3, Canvas/SVG, JavaScript). This broad difficulty means that cloud-computing does not have a highly compatible productivity authoring environment designed to meet the transition needs
Gary Edwards

Frankly Speaking: Microsoft's Cynicism - Flock - 0 views

  • In July, Jones was asked on his blog whether Microsoft would actually commit to conform to an officially standardized OOXML. His response: “It’s hard for Microsoft to commit to what comes out of Ecma [the European standards group that has already OK’d OOXML] in the coming years, because we don’t know what direction they will take the formats. We’ll of course stay active and propose changes based on where we want to go with Office 14. At the end of the day, though, the other Ecma members could decide to take the spec in a completely different direction. ... Since it’s not guaranteed, it would be hard for us to make any sort of official statement.”
    • Gary Edwards
       
      Then why is Microsoft dragging us through this standardization nonsense? Is this nothing more than a thinly veiled assault on open standards in general?
  • To at least some people at Microsoft, this isn’t about meeting the needs of customers who want a stable, solid, vendor-neutral format for storing and managing documents. It’s just another skirmish with the open-source crowd and rivals like IBM, and all that matters is winning.
    • Gary Edwards
       
      The battle between OOXML and ODF is very much about two groups of big vendor alliances. Interestingly, both groups seek to limit ODF interoperability, but for different reasons.

      See: The Plot To Limit ODF Interop
  •  
    Good commentary from Frank Hayes of Computerworld concerning a very serious problem. Even if ISO somehow manages to approve MS-OOXML, Microsoft has reserved the right to implement whatever extension of Ecma-OOXML they feel like implementing. The whole purpose of this standardization exercise was to bring interoperability, document exchange and long term archive capability to digital information by separating the file formats from the traditions of application, platform and vendor dependence.

    If Microsoft is determined to produce a variation of OOXML that meets the needs of their proprietary application-platform stack, including proprietary bindings and dependencies, any illusions we might have about open standards and interoperability will be shattered. By 2008, Microsoft is expected to have over a billion MS-OOXML-ready systems intertwined with their proprietary MS Stack of desktop, server, device and web applications.

    How are we to interoperate/integrate non-Microsoft applications and services into that MS Stack if the portable document/data/media transport is off limits? If you thought the MS Desktop monopoly posed an impossible barrier, wait until the world gets a load of the MS Stack!

    Good article Frank.

    ~ge~

Gary Edwards

OOXML/ODF: Just One Battlefield in a Much Bigger War | Linux Today - 2 views

  • If the OOXML format in its current form cannot get made into a true ISO standard, it could lock Microsoft out of any future plays in what could be the biggest IT revolution to date. Here are the pieces of the puzzle that fit together for me:
  • "Amazon SimpleDB is a web service for running queries on structured data in real time."
  • "Structured data." And what's a good way to contain such data? In well-built structured data file format of course. Like, for instance, the Open Document Format (ODF). And who has a vested interest in ODF? IBM certainly does. And so does Sun. And these two companies, along with Google, Microsoft, and I'm sure many others, realize that if cloud computing does indeed take off, then it will be the file format that makes the whole thing work. Which is why Microsoft feels it must get their format standardized. Even with tactics that ironically have started to attract the attention of the EU again. How else can they get a piece of the cloud pie?
    • Gary Edwards
       
      Partly right. The MS plan is actually much bigger than Brian Profitt suggests. The MSOffice 2007 SDK is filled with new APIs, the most interesting of which are the ones connecting MSOffice to XAML and the Windows Presentation Foundation layer. The killer component though is the OOXML <> fixed/flow translator component with related APIs. fixed/flow is a new web format that is 100% proprietary. It's at the heart of the Microsoft cloud, enabling developers to easily transition between OOXML and IE browsers able to serve fixed/flow pages to devices, desktops and just about any kind of repurposing publication or content management system imaginable.

      If ISO approves OOXML, then they've standardized MSOffice as a legitimate Web editor - an enterprise publication, content management, and archive management front end. Instead of producing W3C-compliant (X)HTML - CSS web pages though, the MSOffice Web editor will produce the proprietary fixed/flow format via the OOXML translation component we can now see in the SDK. What we don't see in the MSOffice SDK is the use of W3C technologies such as (X)HTML, CSS, SVG, XForms, SMIL, XSL, XSL-FO. Instead of Mozilla XUL or Adobe Flex, we find XAML and Silverlight.

      IMHO, Microsoft is making their run for the Web. Key to this run is ISO approval of OOXML. Once that happens, there will be no need for MS product compliance with W3C standards. The break will be complete. The Web will be forever split into the Windows Web, and the Firefox - Apache Tomcat Web. And never the twain shall meet.
Graham Perrin

ODF versus OOXML: Don't forget about HTML! - O'Reilly XML Blog - 0 views

  • Don't forget about HTML
  • February 25, 2007
  • HTML’s potential and actual suitability for much document interchange
  • ...27 more annotations...
  • HTML is the format to consider first
  • validated, standards compliant XHTML in particular
  • HTML at one end (simple WP documents)
  • PDF at the other end (full page fidelity but read-only)
  • W3C versus ISO
  • HTML, ODF, OOXML, PDF
  • Lie adopts an extreme view towards overlap of standards:
  • overlap at all brings nothing but misery and bloat.
  • The next dodgy detail is to make blanket comparisons between HTML and ODF/OOXML.
  • ODF and OOXML deal with many issues that HTML/CSS simply does not.
  • the W3C argument might be to say that every part should have a URL
  • a strange theory that MS wants ODF and OOXML to both fail
  • being pro-ODF does not mean you have to be anti-OOXML
  • HTML is the format of choice for interchange of simple documents
  • ODF will evolve to be the format of choice for more complicated documents
  • OOXML is the format of choice for full-fidelity dumps from MS Office
  • PDF is the format of choice for non-editable page-faithful documents
  • all have overlap
  • we need to encourage a rich library of standard technologies,
  • widely deployed,
  • free,
  • unencumbered,
  • explicit,
  • awareness of when each is appropriate
  • an adequate set of profiles and profile validators
  • using ISO Schematron
  • Plurality
Graham Perrin

Next round of ODF vs OOXML… « CyberTech Rambler - 0 views

  • approval of a standard that wasn't ready
  • no one at ISO listened
  • for not incorporating BRM resolutions in the published standard
  • ...10 more annotations...
  • in the time frame taken to approve it
  • by National Body to trust that BRM has influence
  • by BRM for not attending to every concern of national bodies
  • The whole OOXML thing is a collection of mistakes
  • OOXML is fundamentally intended to document a format for a pre-existing technology and feature set of recent proprietary systems.
  • years for IS29500 to have a really good debugged version
  • years for ODF to have a good, complete debugged version
  • the nature of big standards
  • sad about OOXML meeting
  • Apple, Oracle and British Library did not even bother to turn up
  •  
    Found myself blocked from commenting on that blog entry for some reason. Here's the comment I tried to post.

    @ctrambler "Between vendor-heavy or user-heavy, I choose vendor-heavy. It is after all, a office document format designed for office application. Linking with other systems is important, but it is not the ultimate aim."

    That statement bespeaks lack of familiarity with what an IT standard *IS.* But it is a lack of familiarity shared by all too many who work on IT standards. Standards are about uniformity, not variability.

    An international standard must by law specify [i] all characteristics [ii] of an identifiable product or group of products [iii] only in mandatory "must" or "must not" terms. WTDS 135 EC - Asbestos, (World Trade Organization Appellate Body; 12 March 2001; HTML version), para. 66-70, http://www.wto.org/english/tratop_e/dispu_e/cases_e/ds135_e.htm And IT standards in particular must "clearly and unambiguously specify all conformity requirements that are essential to achieve the interoperability." ISO/IEC JTC 1 Directives, (5th Ed., v. 3.0, 5 April 2007) pg. 145, http://www.jtc1sc34.org/repository/0856rev.pdf Absent such specifications, a standard is a standard in name only.

    A standard is intended to establish a market in standardized goods, creating economic efficiency and competition. This is perhaps most simply illustrated with weights and measures, where a pound of flour must weigh the same regardless which vendor sells the product. But we can also see it in the interoperability context, e.g., with standardized nuts, bolts, and wrenches. Absent sufficient specificity to enable and require interoperability, ODF and OOXML create technical barriers to trade rather than promoting competition.

    And the Agreement on Technical Barriers to Trade unambiguously requires that national standardization bodies "shall ensure that technical regulations [includes international standards] are not prepared, adopted or applied with a v
  •  
    (continuation) And the Agreement on Technical Barriers to Trade unambiguously requires that national standardization bodies "shall ensure that technical regulations [includes international standards] are not prepared, adopted or applied with a view to or with the effect of creating unnecessary obstacles to international trade." http://www.wto.org/english/docs_e/legal_e/17-tbt_e.htm#articleII

    So while I agree that linking IT systems may not invariably be the ultimate goal, sufficient specificity in an IT standard to do so is in fact a threshold user and legal requirement. Otherwise, one has vendor lock-in and definition of the standard is controlled by the vendor with the largest market share, not the standard itself. Neither ODF nor OOXML met that threshold for eligibility as international standards and still do not. In both cases, national standardization bodies voted to adopt the standards without paying heed to fundamental legal and user requirements.
Jesper Lund Stocholm

Groklaw - When Would You Use OOXML and When ODF? -- What is OOXML For? - 0 views

  • The legacy formats are just popped into an OOXML wrapper
    • Alex Brown
       
      Funny how often this old canard is brought out. Do people really believe it?
    • Jesper Lund Stocholm
       
      I actually think it is - to some extent - true. Apart from stuff like DrawingML, CustomML etc, OOXML is a transformation of the binary stuff and hence in essence the same document format. "Someone" told me the other day that he had knowledge of a company that didn't use the "xml-ness" of OOXML to manipulate OOXML files but simply considered them TEXT files. They could do this because OOXML is very close to the binary formats.
    • Alex Brown
       
      True, but the stuff inside is XML -- I think there's a widespread view that OOXML is a lot of lightly wrapped BLOBs
    • Jesper Lund Stocholm
       
      Ok - you are possibly correct. Somehow content in a file called printerSettings.bin seems to attract more disturbance than base64-encoded, binary attribute values with attribute name "printerSettings".
    • Jesper Lund Stocholm
       
      Actually, I think the phrase someone coined that "OOXML is just the binary document formats dressed up in angle brackets" fits just fine :o)
  • Whoa, whoa, whoa! - Authored by: Anonymous on Friday, May 01 2009 @ 02:21 AM EDT
  • Whoa, whoa, whoa! - Authored by: Anonymous on Friday, May 01 2009 @ 03:17 AM EDT
  •  
    It fits just fine for most of the spec, but there are also major chunks that include descriptive element and attribute names, for example, the compatibility markup volume. My sense is that these are areas where new features were introduced in Office 2007. But they kind of fly in the face of the Microsoft claims back then that the abbreviated markup was deliberately chosen to maximize execution speed. If so, why isn't all the markup in abbreviated form?
Gary Edwards

Microsoft attacks UK government decision to adopt ODF for document formats - 0 views

  • the panel reached consensus that one standard is important to ensure interoperability and to allow users to collaborate effectively on the same document,” said the minutes
  • A subsequent meeting of the same panel also considered a detailed comparison of ODF and OOXML, citing concerns raised by one member. “We need to make sure there is sound reasoning to back up the decision as this may incur significant costs to some government departments. The comparison may be slightly skewed by concentrating solely on implementation of strict OOXML, which is an emerging standard similar to ODF 1.3, whilst considering implementations of all ODF versions. It ignores transitional OOXML which does have very wide support, arguably wider than ODF,” said the meeting minutes.
  • “LH described the issues identified in the [comparison] document and added that there has since been some confusion about support for OOXML strict in LibreOffice. It appears that LibreOffice supports the standardised transitional OOXML, as well as a different Microsoft version of transitional OOXML,” the minutes stated.
  • ...3 more annotations...
  • Despite its obvious disappointment at the government’s decision, Microsoft was also keen to point out that its software does fully support ODF.
  • “The good news for Office users is that Office 365 and Office 2013 both have excellent support for the ODF file format, so their current and future investments in Office are safe. In fact, Office 365 remains the only business productivity suite on the UK government’s G-Cloud that is accredited to the government’s own security classification of 'Official' and which also supports ODF,” said the Microsoft spokesman.
  • Government Digital Service director Mike Bracken
  •  
    "Microsoft has attacked the UK government's decision to adopt ODF as its standard document format, saying it is "unclear" how UK citizens will benefit. The Cabinet Office announced its new policy yesterday, whereby Open Document Format (ODF) is immediately established as the standard for sharing documents across the public sector, with PDF and HTML also acceptable when viewing documents. SERGIGN - FOTOLIA The decision was a rejection of Microsoft's preference for Open XML (OOXML), the standard used by its Word software, which remains the dominant wordprocessor in government. "Microsoft notes the government's decision to restrict its support of the file formats it uses for sharing and collaboration to just ODF and HTML," said a spokesman for the software giant in a statement to Computer Weekly. "Microsoft believes it is unproven and unclear how UK citizens will benefit from the government's decision. We actively support a broad range of open standards, which is why, like Adobe has with the PDF file format, we now collaborate with many contributors to maintain the Open XML file format through independent and international standards bodies," it added"
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • n contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
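    To make the "zip archive with standard component parts" concrete, here is a minimal sketch of the packaging file an unzipped ePub carries alongside its XHTML chapters. The META-INF/container.xml shown follows the ePub (OCF) packaging rules as I understand them; the OEBPS/content.opf path is only a conventional example, not a requirement.

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- META-INF/container.xml: points a reading system at the package document;
           OEBPS/content.opf is a common, but not required, location -->
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>

    The package document it points to then lists the book's metadata and its XHTML chapter files, which is exactly why the article can describe an ePub book as a Web page in a wrapper.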
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
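    The article describes doing this cleanup with search-and-replace. As an alternative sketch (not what the authors used), a small XSLT 1.0 pass can do the same regrouping mechanically. The class name "list-item" below is hypothetical, standing in for whatever class the InDesign export actually emits; the rest is an identity transform plus one rule that wraps each contiguous run of list-item paragraphs in a proper ul.

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- cleanup.xsl: wrap runs of <p class="list-item"> (hypothetical class name)
           in a <ul> of <li> elements, copying everything else through unchanged -->
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns="http://www.w3.org/1999/xhtml"
          xmlns:x="http://www.w3.org/1999/xhtml"
          exclude-result-prefixes="x">

        <!-- identity transform: default behaviour is to copy input to output -->
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>

        <!-- first list-item paragraph in a run: open a <ul> and walk the run -->
        <xsl:template match="x:p[@class='list-item']
                             [not(preceding-sibling::*[1][self::x:p[@class='list-item']])]">
          <ul>
            <li><xsl:apply-templates select="node()"/></li>
            <xsl:call-template name="next-item">
              <xsl:with-param name="n" select="following-sibling::*[1]"/>
            </xsl:call-template>
          </ul>
        </xsl:template>

        <!-- recurse over immediately following siblings while they are still list items -->
        <xsl:template name="next-item">
          <xsl:param name="n"/>
          <xsl:if test="$n[self::x:p[@class='list-item']]">
            <li><xsl:apply-templates select="$n/node()"/></li>
            <xsl:call-template name="next-item">
              <xsl:with-param name="n" select="$n/following-sibling::*[1]"/>
            </xsl:call-template>
          </xsl:if>
        </xsl:template>

        <!-- later list-item paragraphs in the run are emitted above, so suppress them here -->
        <xsl:template match="x:p[@class='list-item']
                             [preceding-sibling::*[1][self::x:p[@class='list-item']]]"/>

      </xsl:stylesheet>

    Any XSLT 1.0 processor can run this over the exported XHTML; the same guiding principle applies, keeping the output as plain and software-neutral as possible.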
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
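    The authors' script is not reproduced in the article, but its general shape is easy to sketch: match XHTML block elements and emit the corresponding ICML ranges. The fragment below is only an illustration of that shape, not the actual script; the ICML element and attribute names (ParagraphStyleRange, CharacterStyleRange, Content, AppliedParagraphStyle) are recalled from Adobe's IDML/ICML documentation and should be checked against it, and the style names "Heading1" and "BodyText" are placeholders for whatever the InDesign template defines.

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- xhtml-to-icml.xsl (illustrative fragment): map XHTML headings and
           paragraphs onto ICML paragraph ranges; only h1 and p are handled here -->
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:x="http://www.w3.org/1999/xhtml"
          exclude-result-prefixes="x">
        <xsl:output method="xml" indent="yes"/>

        <xsl:template match="/x:html">
          <Document>
            <Story Self="story_main">
              <xsl:apply-templates select="x:body/*"/>
            </Story>
          </Document>
        </xsl:template>

        <xsl:template match="x:h1">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading1">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

        <xsl:template match="x:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/BodyText">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run with the processor the article names, the whole step is one command, for example: xsltproc xhtml-to-icml.xsl chapter-clean.xhtml > chapter.icml (file names here are illustrative).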
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
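    A small illustration of that point: the class values below ("chapter-epigraph", "epigraph-attribution", "pull-quote") are invented for the example, but they show how a publisher could layer its own semantics onto plain XHTML without leaving the standard, in the same spirit as microformats.

      <!-- ordinary XHTML; publisher-defined semantics ride in class attributes,
           and the class names here are made up for illustration -->
      <div class="chapter-epigraph">
        <p>Sample epigraph text goes here.</p>
        <p class="epigraph-attribution">Attribution goes here.</p>
      </div>
      <blockquote class="pull-quote">
        <p>An ePub book is a Web page in a wrapper.</p>
      </blockquote>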
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format.

    My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-ready application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
  •  
    As an afterthought, I was thinking that an alternative title to this article might have been, "Working with the Web as the Center of Everything".
Gary Edwards

The Merging of SOA and Web 2.0: 2 - 0 views

  • In many cases, the mashups' data or information sources have incompatible formats so integration becomes a problem.
  •  
    Great article series from eWeek.  A must read.  But it all comes down to interoperability across two stack models:  The Microsoft Vista Stack, and an alternative Open Stack model that does not yet exist!

    Incompatible formats become a nightmare for the kind of integration any kind of SOA implementation depends on, let alone the Web 2.0 AJAX MashUps this article focuses on.

    I wonder why eWEEK didn't include the Joe Wilcox Microsoft Watch article, "Obla De OBA Da".  Joe hit hard on the connection between OOXML and the Vista Stack, but he missed the implications this will have for MS SOA solutions.  Open source SOA solutions will be locked out of the Vista Stack.  And with 98% or more of existing desktop business processes bound to MSOffice, the transition of those business processes to the Vista Stack will no doubt have a dramatic impact on the marketplace.  Before the year is out, we'll see Redmond let loose with a torrent of MS SOA solutions.  The only reason they've held back is that they need to have all the Vista Stack pieces in place first.

    I don't think Microsoft is being held back by OOXML approval at ISO either.  ISO approval might have made a difference in Europe in 2006, but even there, the EU IDABC has dropped the ISO requirement.  For sure ISO approval means nothing in the US, as California and Massachusetts have demonstrated. 

    All that matters to state CIOs is that they can migrate existing documents and business processes to XML.  The only question is, "Which XML?  OOXML, ODF or XHTML+".

    The high-fidelity conversion ratio and the non-disruptive OOXML plugin for MSOffice have certainly given OOXML the edge in this process.
Gary Edwards

Novell adds fuel to the fire in OOXML feud - News - Builder AU - 0 views

  • Microsoft has created its own proprietary document format, Office Open XML (OOXML), as a rival to the community-developed OpenDocument Format (ODF). OOXML is used in Microsoft's latest applications suite, Office 2007. Despite some efforts by the two camps, ODF and OOXML are, for the most part, not interoperable, meaning documents that are created in one format cannot be successfully read by applications based on the other format. According to Novell's vice president of developer platforms, Miguel de Icaza, the situation won't change in the foreseeable future. "There's no end in sight to the ongoing disputes between the two camps," said de Icaza, speaking at XML 2007, a Microsoft-sponsored event, on Tuesday. "In 2006, there was lots of FUD [fear, uncertainty and doubt] about the problems behind OOXML and it went downhill from there," Icaza said. "Neither group is willing to make the big changes required for real compatibility," de Icaza added.
    • Gary Edwards
       
      What efforts are you talking about? The last time any effort was made to accommodate interoperability was in 2003, with the establishment of the ODF "Compatibility clause" (Section 1.5). "Despite some efforts by the two camps, ODF and OOXML are, for the most part, not interoperable, meaning documents that are created in one format cannot be successfully read by applications based on the other format..."

      Section 1.5 authorizes the use of "foreign elements" and "alien attributes". These techniques were specifically written into ODF for handling unknown characteristics of existing MSOffice documents (binary and/or XML) on conversion to ODF. Since the Section 1.5 addition in 2003, every other suggestion to improve interop between ODF and MSOffice documents has been rejected by the OASIS ODF TC and Sub Committees.

      There are three problems with Section 1.5. The first is that there is only so much that can be done with foreign elements and alien attributes. There are still remaining compatibility issues relating to the basic structures of lists, tables, fields, sections and page dynamics. The OpenDocument Foundation spent over a year trying to get approval for five generic elements relating to these structures, without success. As I said, there has not been a single successful compatibility - interoperability effort since 2003, although many have been proposed.

      The second reason for the failure of Section 1.5 is that OpenOffice only partially implements the "Compatibility Clause": OOo only recognizes "foreign elements and alien attributes" within text spans and paragraphs.

      The third reason is that "compatibility" is optional in ODF. The clause does not have any teeth. Applications can implement only those aspects of the spec they feel like implementing, and still be in total "compliance". This creates serious interop problems not only for MSOffice plug-in converted documents, but also renders as
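      To make the Section 1.5 mechanism a bit more concrete: "foreign elements" are simply elements living outside the ODF namespaces that a conforming document is allowed to carry along. Below is a rough Python sketch - with a deliberately partial namespace list, and assuming content.xml has already been unzipped from the ODF package - of how a tool might flag them. It is illustrative only, not a conformance checker.

        from lxml import etree

        # Partial whitelist of ODF namespaces; a real checker would cover the
        # full set defined by the OpenDocument specification.
        ODF_NAMESPACES = {
            "urn:oasis:names:tc:opendocument:xmlns:office:1.0",
            "urn:oasis:names:tc:opendocument:xmlns:text:1.0",
            "urn:oasis:names:tc:opendocument:xmlns:table:1.0",
            "urn:oasis:names:tc:opendocument:xmlns:style:1.0",
            "urn:oasis:names:tc:opendocument:xmlns:drawing:1.0",
        }

        def foreign_elements(content_xml_path):
            """Yield (qualified name, source line) for elements outside the ODF namespaces."""
            tree = etree.parse(content_xml_path)
            for element in tree.iter():
                if not isinstance(element.tag, str):
                    continue  # skip comments and processing instructions
                qname = etree.QName(element)
                if qname.namespace not in ODF_NAMESPACES:
                    yield qname.text, element.sourceline

        if __name__ == "__main__":
            for name, line in foreign_elements("content.xml"):
                print(f"foreign element {name} at line {line}")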
Gary Edwards

War rages on over Microsoft's OOXML plans: Insight - Software - ZDNet Australia - 0 views

  • "We feel that the best standards are open standards," technology industry commentator Colin Jackson, a member of the Technical Advisory committee convened by StandardsNZ to consider OOXML, said at the event. "In that respect Microsoft is to be applauded, as previously this was a secret binary format." Microsoft's opponents suggest, among a host of other concerns, that making Open XML an ISO standard would lock the world's document future to Microsoft. They argue that a standard should only be necessary when there is a "market requirement" for it. IBM spokesperson Paul Robinson thus describes OOXML as a "redundant replacement for other standards". Quoting from the ISO guide, Robinson said that a standard "is a document by a recognised body established by consensus which is aimed at achieving an optimum degree of order and aimed at the promotion of optimum community benefits". It can be argued that rather than provide community benefit, supporting multiple standards actually comes at an economic cost to the user community. "We do not believe OOXML meets these objectives of an international standard," Robinson said.
  •  
    "aimed at achieving an optimum degree of order .... and .... aimed at the promotion of optimum community benefits:. Uh, excuse me Mr. Robinson, tha tsecond part of your statement, the one concerning optimum community benefits - that would also disqualify ODF!! ODF was not designed to be compatible with the 550 million MSOffice desktops and their billions of binary docuemnts. Menaing, these 550 million users will suffer considerable loss of information if they try to convert their existing documents to ODF. It is also next to impossible for MSOffice applications to implement ODF as a fiel format due to this incompatbility. ODF was designed for OpenOffice, and directly reflects the way OpenOffice implements specific document structures. The problem areas involve large differences between how OpenOffice implments these structures and how MSOffice implements these same structures. The structures in question are lists, fields, tables, sections and page dynamics. It seems to me that "optimum community benefits" would include the conversion and exchange of docuemnts with some 550 million users!!!! And ODF was clearly not designed for that purpose!
  •  
    I don't agree with this statement from Microsoft's Oliver Bell. As someone who served on the OASIS ODF Technical Committee from its inception in November of 2002 through the next five years, I have to disagree. It's not that Microsoft wasn't welcome. They were. It's that the "welcome" came with some serious strings. For Microsoft, joining OASIS would have meant strolling into the camp of their most determined competitors, and having to amend an existing standard to accommodate the implementation needs of MSOffice. There is simply no way for the layout differences between OpenOffice and MSOffice to be negotiated short of putting both methodologies into the spec. That would mean the spec providing two ways of implementing lists, tables, fields, sections and page dynamics. A true welcome would have been for ODF to have been written to accommodate these differences. Rather than writing ODF to meet the implementation model used by OpenOffice, it would have been infinitely better to write ODF as a totally application-independent file format using generic document structures that could be adapted by any application. It turns out that this is exactly the way the W3C goes about the business of writing its file format specifications (HTML, XHTML, CSS, XForms, and CDF). The results are highly interoperable formats that any application can implement.
  •  
    You can harmonize an application-specific format with a generic, application-independent format. But you can't harmonize two application-specific formats!
    The easy way to solve the document exchange problem is to leave the legacy applications alone, and work on the conversion of OOXML and ODF documents to a single, application-independent generic format. The best candidate for this role is the W3C's CDF.
    CDF is a description of how to combine existing W3C format standards into a single container. It is meant to succeed HTML on the Web, but has been designed as a universal file format.
    The most exciting combination is that of XHTML 2.0 and CSS, in that it is capable of handling the complete range of desktop productivity office suite documents. Even though it's slightly outside the W3C's reach, the most popular CDF compound is that of XHTML, CSS and JavaScript - a combination otherwise known as "AJAX".
Gary Edwards

The Harmonization Myth: ISO Approval of Open XML Will Hurt Interoperability - 0 views

  • This myth is rather silly if you think about it. Here is why… When people talk about interoperability and Open XML they do so primarily in the context of ODF. The story goes something like this: 1. Open XML is not interoperable with ODF 2. Open XML should be interoperable with ODF because ODF is already an ISO standard! 3. Hence: Open XML is no good, because it is not interoperable with ODF and therefore Open XML should not be an ISO standard!!!
    • Gary Edwards
       
      Forget ISO approval of OOXML. I would rather see ISO enforce the current directive that ODF be brought into compliance with existing ISO interoperability requirements. Then, and only then, should ISO consider OOXML.
      The reason for this approach? If ODF were compliant with existing ISO interop requirements, there would probably be some hope of harmonizing ODF and OOXML. Until ODF is stripped of its application-specific settings, and fully documented, we can hardly begin the process of figuring out harmonization.
      ODF 1.0 has four gaping holes that must be attended to before ISO proceeds any further with either ODF or OOXML. The holes are that ODF numbered lists, formulas and the presentation layer (styles) are woefully underspecified. The fourth problem is that ODF seriously lacks an interoperability framework.
      These ODF problems can of course be traced back to the fact that ODF is application-specific and bound to the "semantics and capabilities" of OpenOffice. That creates all kinds of problems. OOXML, on the other hand, is even worse: it is application, platform and vendor specific! If ODF were brought up to snuff, we could reasonably start work on harmonization, thereby eliminating the need to standardize two file formats for the same purposes. Until ODF is fixed, what's the world to do?
      ~ge~
Gary Edwards

XML.com: Standard Data Vocabularies Unquestionably Harmful - 0 views

  • At the onset of XML four long years ago, I commenced a jeremiad against Standard Data Vocabularies (SDVs), to little effect. Almost immediately after the light bulb moment -- you mean, I can get all the cool benefits of web in HTML and create my own tags? I can call the price of my crullers &lt;PricePerCruller&gt;, right beside beside &lt;PricePerDonutHole&gt; in my menu? -- new users realized the problem: a browser knows how to display a heading marked as &lt;h1&gt; bigger and more prominently than a lowlier &lt;h3&gt;. Yet there are no standard display expectations or semantics for the XML tags which users themselves create. That there is no specific display for &lt;Cruller&gt; and, especially, not as distinct from &lt;DonutHole&gt; has been readily understood to demonstrate the separation of data structure expressed in XML from its display, which requires the application of styling to accomodate the fixed expectations of the browser. What has not been so readily accepted is that there should not be a standard expectation for how a data element, as identified by its markup, should be processed by programs doing something other than simple display.
    • Gary Edwards
       
      ODF and OOXML are contending to become the Standard Data Vocabulary for desktop office suite XML markup. Sun and Microsoft are proposing the standardization of OpenOffice and MSOffice custom-defined XML tags for which there are no standard display expectations. The display expectations must therefore be very carefully described: i.e., the semantics of display fully provided.
      In this article Walter Perry is pointing out the dangers of SDVs being standardized for specific purposes without also having well thought out and fully specified display semantics. In ODF - OOXML speak, we would call display "presentation", or "layout", or "styles".
      The presentation layer of each is woefully underspecified!
      Given that the presentation layers of both ODF and OOXML are directly tied to how the OpenOffice and MSOffice layout engines work, the semantics of display become even more important. For MSOffice to implement an "interoperable" version of OpenOffice ODF, MSOffice must be able to mimic the OpenOffice layout engine methods - methods which are of course quite different from the internal layout model of MSOffice. This differential results in a breakdown of conversion fidelity, and therein lies the core of the ODF interoperability dilemma!
  • There have also emerged a few "horizontal" data vocabularies, intended for expressing business communication in more general terms. One of these is the eXtensible Business Reporting Language (XBRL), about which more below. Most recently, governments and governmental organizations have begun to suggest and eventually mandate particular SDVs for required filings, a development which expands what troubles me about these vocabularies by an order of magnitude.
  • ...5 more annotations...
    • Gary Edwards
       
      Exactly! When governments mandate a specific SDV, they are also mandating inherent concepts and methods unique to the provider of the SDV. In the case of ODF and OOXML, where the presentation layers are application-specific and woefully underspecified, interoperability becomes an insurmountable challenge. Interop remains stubbornly application-bound.
      Furthermore, there is no way to "harmonize" or "map" from one format to another without somehow resolving the application-specific presentation differences.
    • Gary Edwards
       
      "in the nature of the SDV's themselves is the problem of misstatement, of misdirection of naive interpretation, and potential for fraud.
      Semantics matter! The presentation apsects of a document are just as important as the content.
    • Gary Edwards
       
      Walter: "I have argued for years that, on the basis of their mechanism for elaborating semantics, SDVs are inherently unreliable for the transmission or repository of information. They become geometrically less reliable when the types or roles of either the sources or consumers of that information increase, ending at a nightmarish worst case of a third-order diminution of the reliability of information. And what is the means by which SDVs convey meaning? By simple assertion against the expected semantic interpretations hard-coded into a process consuming the data in question.
      At this point in the article i'm hopign Walter has a solution. How do we demand, insist and then verify that SDV's have fully specifed the semantics, and not jus tpassed along the syntax?
      With ODF and OOXML, this is the core of the interoperability problem. Yet, there really is no way to separate the presentation layers from the uniquely different OpenOffice and MSOffice layout engine models.
    • Gary Edwards
       
      Interesting concept here: "the bulk of expertise is in understanding the detail of connections between data and the processes which produced it or must consume it ... it is these expert connections which SDV's are intended to sever."
      I'm not quite sure what to make of that statement. When an SDV is standardized by ISO, the expectation is that the connections between data and processes would be fully understood, and implementations consistent across the board.
      Sadly, ODF is ISO approved, but doesn't come close to meeting these expectations. ODF interop might as well be ZERO. And the only way to fix it is to go into the presentation layer of ODF, strip out all the application-specific bindings, and fully specify the semantics of layout.
  • In short, the bulk of expertise is in understanding the detail of connections between data and the processes which produced it or must consume it. It is precisely these expert connections which standard data vocabularies are intended to sever.
Gary Edwards

ODF vs. OOXML: War of the Words | Andrew Updegrove: Tales of Adversego - 0 views

  •  
    "For some time I've been considering writing a book about what has become a standards war of truly epic proportions.  I refer, of course, to the ongoing, ever expanding, still escalating conflict between ODF and OOXML, a battle that is playing out across five continents and in both the halls of government and the marketplace alike.  And, needless to say, at countless blogs and news sites all the Web over as well. Arrayed on one side or the other, either in the forefront of battle or behind the scenes, are most of the major IT vendors of our time.  And at the center of the conflict is Microsoft, the most successful software vendor of all time, faced with the first significant challenge ever to one of its core businesses and profit centers - its flagship Office productivity suite. The story has other notable features as well:  ODF is the first IT standard to be taken up as a popular cause, and also represents the first "cross over" standards issue that has attracted the broad support of the open source community.  Then there are the societal dimensions: open formats are needed to safeguard our culture and our history from oblivion.  And when implemented in open source software and deployed on Linux-based systems (not to mention One Laptop Per Child computers), the benefits and opportunities of IT become more available to those throughout the third world. There is little question, I think, that regardless of where and how this saga ends, it will be studied in business schools and by economists for decades to come.  What they will conclude will depend in part upon the materials we leave behind for them to examine.  That's one of the reasons I'm launching this effort now, as a publicly posted eBook in progress, rather than waiting until some indefinite point in the future when the memories of the players in this drama have become colored by the passage of time and the influence of later events. My hope is that those of you who have played or are n
Gary Edwards

Sun Supports OOXML as an ISO Standard? - 0 views

  • Sun Microsystems Inc., largely considered an avowed opponent of Open XML because of its own development and support for the competing, ODF-based StarOffice suite, found itself in the unexpected position of stating its support for ratifying Open XML -- albeit after some changes in the proposal are made.
  •  
    Quote: Sun Microsystems Inc., largely considered an avowed opponent of Open XML because of its own development and support for the competing, ODF-based StarOffice suite, found itself in the unexpected position of stating its support for ratifying Open XML -- albeit after some changes in the proposal are made. "We wish to make it completely clear that we support DIS 29500 becoming an ISO Standard and are in complete agreement with its stated purposes of enabling interoperability among different implementations and providing interoperable access to the legacy of Microsoft Office documents," Jon Bosak, a Sun representative to V1, wrote in an e-mail to other committee members over the weekend. "Sun voted No on Approval because it is our expert finding, based on the analysis so far accomplished in V1, that DIS 29500 as presently written is technically incapable of achieving those goals, not because we disagree with the goals or are opposed to an ISO Standard that would enable them." Sun "found itself in the unexpected position of stating its support for ratifying OOXML"?  What???? This is the official position of Sun?

    For the nearly five years that I have been a member of the OASIS ODF TC, Sun has opposed
Gary Edwards

ODF Civil War: Bull Run - Suggested Changes on the Metadata proposal - OASIS ODF - 0 views

  • From our perspective it would be better to aim for doing the job in ODF 1.2, even if that requires delay. We will oppose ODF 1.2 at ISO unless the interoperability warts are cleaned up. What the market requires is no longer in doubt. See the slides linked above and further presentations linked from this page, &lt; http://ec.europa.eu/idabc/en/document/6474/5935&gt;. Substantial progress toward those goals would seem to be mandatory to maintain Europe's preference for a harmonized set of file formats that uses ODF to provide the common functionality. Delaying commencement of such work enhances the likelihood that governments will tire of waiting for ODF to become interoperable with MS Office and simply go with MOOXML. We may not be able to force Microsoft to participate in the harmonization work, but we will be in a far better position if we have done everything we can in aid of that interoperability without Microsoft's assistance. As the situation stands, we have what is known in the U.S. as a "Mexican stand-off," where neither side has taken a solitary step toward what Europe has requested. We have decided to do that work via a fork of ODF; it is up to this TC whether it wishes to cooperate in that effort.
  •  
    This is the famous marbux response to Sun regarding Sun's attempt to partially implement ODF 1.2 XML-RDF metadata.  It's a treasure.

    There is one problem with marbux's statement though.  We had decided long ago not to fork ODF even if the five iX "interoperability enhancement" proposals were refused by the OASIS ODF TC.  This assurance was provided to Massachusetts CIO Louis Gutierrez with the first ODF iX proposal, submitted on July 12th, 2006.  Louis ended up signing off on three iX proposals before his resignation on October 4th, 2006.

    The ODF iX enhancements were essential to saving ODF in Massachusetts.  Without them, there was no way our da Vinci plug-in could convert existing MSOffice documents and processes to ODF with the needed round trip fidelity.

    For nearly a year we tried to push through some semblance of the needed iX enhancements.  We also tried to push through a much needed Interoperability Framework, which will be critical to any ISO approval of ODF 1.2.

    Our critics are correct in that every iX effort was defeated, with Sun providing the primary opposition. 

    Still, rather than fork ODF, we are simply going to move on.

    On October 4th, 2006, all work on ODF da Vinci ended - not to be resumed unless and until we had the ODF iX enhancements we needed to crack the MSOffice-bound workgroup-workflow business process barrier.

    In April of 2007, with our OASIS membership officially shredded by OASIS management, bleeding from the List Enhancement Proposal donnybrook, and totally defeated in our hope - the metadata XML-RDF work - we threw in the towel.

    Since then we've moved on to CDF, the W3C Compound Document format.  Incredibly, CDF is able to do what ODF cannot.  With CDF we can solve the three primary problems confronting governments and MSOffice-bound workgroups everywhere.

    The challenge for these g