
Gary Edwards

AlphaDog Barks Loudly: Why Can't You Guys Just Get Along and Solve MY MSOffice Problem!... - 0 views

  • First, let me say that I am a CIO in a small (20 employees but growing fast) financial services company. I am well aware of how locked in I am getting with our MS-only shop. I am trying to see my way out of it, but this "ODF vs ODFF" fight is leaving me very confused, and no one is working to clear the fog. I beg all parties to really work towards some sort of defined understanding. I don't need cooperation. But what I don't have is well-defined positions from all parties. As it is, I honestly feel safer staying the course with MS right now. It's what I know vs. the mystery of this "open cloud" and all the bellicose infighting. How's that for "in the trenches" data?
    I posted a comment on Andy's blog, and I will post the same comment here for your group (minor edits):
    I will admit to being very, very confused by all of this ODF vs ODFF posturing. I will try to put my current thoughts in short form, but it will be a muddled mess. I warned you!
    From what I gather, the OpenDocument Foundation (ODFF) is attempting to create more of an interop format for working against a background MS server stack (Exchange/SharePoint). You worry that MS is further cementing their business lock-in by moving more and more companies into dependency on not only the client-side software but also the MS business stack that has finally evolved into a serious competitive set. At that level, and in your view, the "atomic unit" is the whole document. The encoded content is not of immediate concern.
    ODF is concerned with the actual document content, which ODFF is prepared to ignore. The "atomic unit" is the bits and parts in the document. They want to break the proprietary encodings that lock people into MSOffice. The stack is not of any immediate concern.
    So, unless I misunderstand either camp, ODF is first attacking the client end of the stack, and ODFF is attacking the backbone server end of the stack. The former wants to break the MSOffice monopoly by allowing people to escape those proprietary encodings, and the latter wants to prevent the dependency on server software like Exchange and SharePoint by allowing MS documents to travel to destinations other than MS "server" products. Is this correct?
    I have yet to see anyone summarize the differences in any non-partisan way, so I am at a loss, and not enough information is forthcoming for me to see what's what. The usual diatribe from people closer to the action is to go into the history of ODF or ODFF, talk about old slights and lost fights, and somehow try to pull at emotional heartstrings so as to gain mindshare. Gary's set of comments on this blog have that flavor. This is childish on both sides.
    Furthermore, the word "orthogonal" comes to mind. I often see people too busy arguing their POV, and not listening to others, when there is no real argument left to make. It's apples and oranges. ODF vs ODFF seems caught in this trap. Everyone wants to win an argument that cannot be won, because the participants are not arguing about the same thing.
    Tell me: why can't the two parties get along? I can see a "cooperative" that attacks the entire stack. Am I the only one seeing this? Am I wrong? If yes, what's the fundamental difference that prevents cooperation?
  •  
    AlphaDog: When asked about the source of his incredible success, the hockey great Wayne Gretzky replied, "I skate to where the puck is going to be, not where it has been." You and I need to do the same.
    Let me state our position as this: the desktop office suite is where the puck has been. The Exchange/SharePoint Hub is where it's going to be. The E/S Hub is the core of an emerging Microsoft-specific web platform, which we've also called the MS Stack. In this stack, MSOffice is relegated to the task of a rich-client end-user interface into the E/S Hub of business processes and collaborative computing connections. The rest of the MS Stack swirls like a galaxy of services around the E/S Hub. Key to Microsoft's web platform is the gradual movement of MSOffice-bound business processes to the E/S Hub, where they connect to the rest of the MS Stack.
    So what now, you might ask? Some things to consider before we get down to brass tacks:
    ... There is a way to break the monopolist's MSOffice desktop grip, but it's not a rip-out-and-replace-the-desktop model. It's a beat-them-at-the-E/S-Hub model that then opens up the desktop space. And opens it up totally. (This is a 3-5 year challenge though, since it's a movement of currently bound business processes.)
    ... It's all about the business processes. Focusing entirely on the file formats is to miss the big picture.
    ... The da Vinci group's position is this: we believe we can neutralize and repurpose MSOffice by converting in proce
Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea... - 0 views

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, the U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership, and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms (Android, iOS, OS X, and Windows), and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Gary Edwards

Microsoft's Quest for Interoperability and Open Standards - 0 views

  •  
    Interesting article discussing the many ways Microsoft is trying to improve the public perception that it is serious about interoperability and open formats, protocols and interfaces. Rocketman attended the recent ISO SC34 meeting in Prague and agrees that Microsoft has indeed put on a new public face filled with cooperation, compliance and unheard-of sincerity.

    He also says, "Don't be fooled!!!"

    There is a big difference between participating in vendor consortia and government-sponsored public standards efforts, and actually implementing open standards at the product level. Looking at how Microsoft products implement open standards, my take is that they have decided on a policy of end-user choice. On the one hand, their applications offer the choice of aging, nearly irrelevant and often crippled open standards. On the other, they offer very rich and feature-filled but proprietary formats, protocols and interfaces that integrate across the entire Microsoft platform of desktops, devices and servers. For instance, IE8 supports 1998-era HTML-CSS, but not the advanced Acid3-level "HTML+" used by WebKit, Firefox, Opera and nearly every device or smartphone operating at the edge of the Web (HTML+ = HTML5, CSS4, SVG/Canvas, JS, JS libraries).

    But they do offer advanced .NET-WPF proprietary alternatives to Open Web HTML+. These include XAML, Silverlight, XPS, LINQ, Smart Tags, and OOXML. Very nice.

    "When an open source advocate, open standards advocate, or, well, pretty much anyone that competes with Microsoft (news, site) sees an extended hand from the software giant toward better interoperability, they tend to look and see if the other hand's holding a spiked club.

    Even so, the Redmond, WA company continues to push the message that it has seen the light regarding open standards and interoperability...."

Gary Edwards

Standardization by Corporation | Can big application vendors be stopped from corrupting... - 0 views

  • Standardization by Corporation. Maybe I spoke too soon. This just came in from ISO: the resignation letter of the SC34 WG1 Chairman, who has completed his three-year term. There is a fascinating statement at the end of Martin Bryan's letter: "The disparity of rules for PAS, Fast-Track and ISO committee generated standards is fast making ISO a laughing stock in IT circles. The days of open standards development are fast disappearing. Instead we are getting "standardization by corporation", something I have been fighting against for the 20 years I have served on ISO committees. I am glad to be retiring before the situation becomes impossible..."
    When corporations join open standards or open source efforts, they arrive with substantial and most welcome financial and expert resources. They also bring marketshare and presence. And they bring business objectives. They have a plan. As long as the corporate plan is aligned with the open standards - open source community work, all is fine. In fact it's great. For sure, though, there will come a time when the corporate plan asserts its direction, and there is possible conflict. At that point, the very same wealth of resources that was cause for celebration can become cause for disappointment and disaster.
    One of the more troubling things I've noticed is that corporations treat everything as a corporate asset to be traded, bartered and dealt for shareholder advantage and value. This includes patents and interoperability issues, which not surprisingly are wrapped into open standards and open source efforts. Rather than embrace the humanitarian, community-of-shared-interest drivers of open standards and open source, corporations naturally plot to get maximum value out of the resources they commit. A primary example of this is Sun's use of OpenOffice, ODF, and an antitrust settlement disaster that left them at the mercy of Microsoft.
  •  
    Will ISO follow either the AFNOR or British proposal to merge ODF and OOXML? I think so. If they continue on their current path of big-vendor-sponsored document wars, ISO will become irrelevant. Sooner or later the ISO National Bodies must take back the standards process from corporate corruption and influence.
    One thing is clear: neither Microsoft nor IBM is about to compromise. IBM has had many chances to improve ODF's interoperability with Microsoft Office and the Office documents, but has been steadfast in its stubborn refusal to concede an inch. Microsoft hides behind its legacy installed base of over 550 million MSOffice desktops. There simply isn't a pragmatic or cost-effective way of transitioning the installed base to ODF without either seriously rewriting and replacing those applications, or changing ODF to be compatible. The marketplace is clear on what it intends to do. Pragmatism will rule. Productivity trumps standards initiatives whenever they are out of sync. In the face of this clear marketplace intent, one would think IBM might compromise on ODF. No way! They are intent on using ODF to force a market-wide rip out and replace of MSOffice.
    Most people assume that there are two opposing groups at war here: the Microsoft OOXML group vs. the IBM ODF group. This isn't an accurate view at all. There is a third, middle group of developers working the treacherous space of conversion - the no man's land between OOXML MSOffice and ODF OpenOffice. The conversion group knows the problems involved, and is actually trying to deliver marketplace-facing solutions. The vendors of course are in this war to the bitter end, and could not care less about the damage they cause to end users. It's also true that the conversion group seeks to bridge desktop productivity into the larger, highly interoperable web platform. It's also possible that ISO will chose to merge
Gary Edwards

Brian Jones: Open XML Formats : Mapping documents in the binary format (.doc; .xls; .pp... - 0 views

  • The second issue we had feedback on was an interest in the mapping from the binary formats into the Open XML formats. The thought here was that the most effective way to help people with this was to create an open source translation project to allow binary documents (.doc; .xls; .ppt) to be translated into Open XML. So we proposed the creation of a new open source project that would map a document written using the legacy binary formats to the Open XML formats. TC45 liked this suggestion, and here was the TC45 response to the national body comments: We believe that Interoperability between applications conforming to DIS 29500 is established at the Office Open XML-to-Office Open XML file construct level only.
    • Gary Edwards
       
      And here I was, betting that the blueprints to the secret binaries would be released the weekend before the September 2nd, 2007 ISO vote on OOXML! Looks like Microsoft saved the move for when they really had to use it: just weeks before the February ISO Ballot Resolution Meetings set to resolve the September 2nd issues. The truth is that years of reverse engineering have depleted the value of keeping the binary blueprints secret. It's true that interoperability with MSOffice in the past was almost entirely dependent on understanding the secret binaries. Today however, with the rapid emergence of the Exchange/SharePoint juggernaut, interop with MSOffice is no longer the core issue. Now we have to compete with E/S, and it is the E/S interfaces, protocols, document APIs and dependencies that must be reverse engineered. The E/S juggernaut is now surging to 70% or more of the market. These near-monopoly levels of market penetration are game changing. One must reverse engineer or license the .NET libraries to crack the interop problem. And this time it's not just MSOffice. Today one must crack into the MS Stack, whose core is that of MSOffice <> E/S. So why not release the secret binary blueprints? If that's the cost of getting the application-, platform- and vendor-specific OOXML through ISO, then it's a small price to pay for your own international standard.
  •  
    Well well well. We knew that IBM had access to the secret binary blueprints back in 2006. Now we know that Sun ALSO had access!
    And why is this important? In June of 2006, Massachusetts CIO Louis Gutierrez asked the OpenDocument Foundation's da Vinci Group to work with IBM on developing the da Vinci ODF plug-in clone of Microsoft's OOXML Compatibility Pack plug-in. When we met with IBM they were insistent that the only way OASIS ODF could establish sufficient compatibility with MSOffice and the billions of binary documents would be to have the secret blueprints open.
    Even after we explained to IBM that da Vinci uses the same internal conversion process that the OOXML plug-in used to convert binaries, IBM continued to insist that opening up the secret binaries was a primary objective of the OASIS ODF community.
    For sure this was important to IBM and Sun, but the secret binaries were of no use to us. da Vinci didn't need them. What da Vinci needed instead was a subset of ODF designed for the conversion of those billions of binary documents! A need opposed by Sun.
    Sun of course would spend the next year developing their own ODF plug-in for MSOffice. But here's the thing: it turns out that Sun had complete access to the secret binary blueprints dating back to 2006!!!!!!
    So even though IBM and Sun have had access to the blueprints since 2006, they have been unable to provide effective conversions to ODF!
    This validates a point the da Vinci group has been trying to make since June of 2006: the problem of perfecting a high fidelity conversion between the billions of binaries and ODF has nothing to do with access to the secret binary blueprints. The real issue is that ODF was NOT designed for the conversion of those binary documents.
    It is true that one could eXtend ODF to achieve the needed compatibility. But one has to be very careful before taking this ro
Gary Edwards

Groklaw - Digging for Truth : The problem with XML document formats - 0 views

  • The problem with that, as I understand it, is that the transitional spec is pretty much unimplementable by anybody except MS
    • Jesper Lund Stocholm
       
      Well, herein lies the problem, dude ... you don't understand it.
  •  
    Wow! The ODF peasants with pitchforks have taken to the streets, and ISO document expert Alex Brown is taking them on. The volume of traffic generated by any discussion of the ISO XML document wars continues to amaze. It's very one-sided though. The basic problem seems to be that ISO has accepted two XML document format standards, OOXML and ODF, with OOXML being held to a higher set of expectations than ODF. Alex would do well if he could step back from the OOXML - ODF war and move the discussion to something like the theoretical IDABC ODEF: the European "Open Document Exchange Formats" design. With ODEF as a single set of XML format requirements against which both OOXML and ODF can be measured and compared, Alex might be able to neutralize the heated emotions of angry Open Source - Open Standards - Open Web supporters, who mistakenly think ODF measures up to ODEF expectations and requirements. Trying to compare ODF to OOXML isn't getting us anywhere. At some point, we have to ask ourselves what it is that we want from a standardized XML document format. Having participated in both the Massachusetts pilot study and the California pilot discussions, I have to say that the public expectation was that XML formats would have a basic set of characteristics: open markup; structured separation of content, presentation and logic; high-level interoperability (exchange); and Web readiness. These are basic "must have" expectations. XML formats were expected to be "better" than 1998 HTML-CSS. But when we apply the basic set of expectations, today's HTML+ (WebKit HTML5, CSS4, SVG/Canvas, JS, JS libraries) turns out to be a far better format. Where the XML formats really fall off the wagon is in the interoperability and Web-ready expectations. For the life of me I don't see how anyone can compare ODF or OOXML interoperability with that of HTML+. And of course, HTML+ is the native language/for
  •  
    Jesper Lund Stocholm was kind enough to point out that, once again, Groklaw is stoking the fires of the XML document wars. This time PJ takes on Alex Brown, convenor of the ISO SC34 document standards group. And Alex responds ... and responds ... and responds. Of course, the attacks keep coming! I left Jesper a rather lengthy comment at: http://tinyurl.com/document-wars
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
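    For concreteness, a minimal sketch of what sits inside that wrapper: the zip begins with an uncompressed mimetype file containing the string application/epub+zip, followed by META-INF/container.xml pointing at the package file, the package file itself, and the XHTML content. The OEBPS/ folder name below is only a common convention, and the path shown is a placeholder.

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- META-INF/container.xml: tells a reading system where the package file lives -->
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>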
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
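    In concrete terms, the clean-up looked something like the fragment below (the class name is hypothetical; InDesign's export derives its class names from the document's styles):

      <!-- as exported: list items masquerading as classed paragraphs -->
      <p class="list-item">Prefer existing and ubiquitous tools.</p>
      <p class="list-item">Prefer free software where possible.</p>

      <!-- after clean-up: proper XHTML list structure -->
      <ul>
        <li>Prefer existing and ubiquitous tools.</li>
        <li>Prefer free software where possible.</li>
      </ul>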
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
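    As a rough sketch only (not the SFU script itself), one template from such a transformation might look like the fragment below. The ICML element names (ParagraphStyleRange, CharacterStyleRange, Content) and the style path are recalled from Adobe's IDML/ICML documentation and should be checked against it; a real ICML file also needs its surrounding document wrapper, omitted here.

      <xsl:stylesheet version="1.0"
                      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                      xmlns:xh="http://www.w3.org/1999/xhtml">

        <!-- each clean XHTML paragraph becomes an ICML paragraph range
             bound to a named style defined in the InDesign template -->
        <xsl:template match="xh:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>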
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
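    For concreteness, the package file inside the wrapper looks roughly like this (an EPUB 2-era content.opf sketch; the identifier and file names are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <package version="2.0" unique-identifier="bookid"
               xmlns="http://www.idpf.org/2007/opf">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
          <item id="css"  href="book.css"        media-type="text/css"/>
          <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
          <item id="ch02" href="chapter02.xhtml" media-type="application/xhtml+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch01"/>
          <itemref idref="ch02"/>
        </spine>
      </package>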
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
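    A small illustration of that kind of extension (the class vocabulary here is invented for the example, not any published microformat):

      <div class="chapter">
        <p class="epigraph">Epigraph text for the chapter opening.</p>
        <p class="summary">A one-paragraph summary of the chapter.</p>
        <!-- to a browser these are ordinary paragraphs; a publisher's toolchain
             can treat the class values as semantic distinctions -->
        <p>Ordinary body prose continues here.</p>
      </div>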
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS.
    Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.
    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.
    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
  •  
    As an afterthought, I was thinking that an alternative title to this article might have been, "Working with Web as the Center of Everything".
Gary Edwards

Microsoft Watch Finally Gets it - It's the Business Applications!- Obla De OBA Da - 0 views

  • To be fair, Microsoft seeks to solve real world problems with respect to helping customers glean more value from their information. But the approach depends on enterprises adopting an end-to-end Microsoft stack—vertically from desktop to server and horizontally across desktop and server products. The development glue is .NET Framework, while the informational glue is OOXML.
    • Gary Edwards
       
      OOXML is the transport - a portable XML document model where the "document" is the interface into content, data and media streaming.

      The binding model for OOXML is "Smart Documents", and it is proprietary!

      Smart Documents is how data, streaming media, scripting-routing-workflow intelligence and metadata is added to any document object.

      Think of the ODF binding model using XForms, XML/RDF and RDFA metadata. One could even use Jabber XMP as a binding model, which is how we did the Comcast SOA based Sales and Inventory Management System prototype.

      Interestingly, Smart Documents is based on pre-written widgets that can simply be dragged, dropped and bound to any document object. The InfoPath application provides a highly visual means for end users to build intelligent self-routing forms. But Visual Studio .NET, which was released with MSOffice 2007 in December of 2006, makes it very easy for application and line-of-business integration developers to implement very advanced data binding using the Smart Document widgets.

      I would also go so far as to say that what separates MSOOXML from Ecma 376 is going to be primarily Smart Documents.

       Yes, there are .NET Framework Libraries and Vista Stack dependencies like XAML that will also provide a proprietary "Vista Stack" only barrier to interoperability, but Smart Documents is a killer.

      One company that will be particularly hurt by Smart Documents is Google. The reason is that the business value of Google Search is based on using advanced and closely held proprietary algorithms to provide metadata structure for unstructured documents.

      This was great for a world awash in unstructured documents. By moving the "XML" structuring of documents down to the author - workgroup - workflow application level though, the world will soon enough be awash in highly structured documents that have end user metadata defining document objects and
  • Microsoft seeks to create sales pull along the vertical stack between the desktop and server.
    • Gary Edwards
       
      The vertical stack is actually desktop - server - device - web based.  The idea of a portable XML document is that it must be able to transition across the converged application space of this sweeping stack model.

      Note that ODF is intentionally limited to the desktop by its OASIS Charter statement. One of the primary failings of ODF is that it is not able to be fully implemented in this converged space. OOXML on the other hand was created exactly for this purpose!

      So ODF is limited to the desktop, and remains tightly bound to OpenOffice feature sets.  OOXML differs in that it is tightly bound to the Vista Stack.

      So where is an Open Stack model to turn to?

      Good question, and one that will come to haunt us for years to come. Because ODF cannot move into the converged desktop-to-server-to-device-to-Web space of information systems connected through a portable document/data transport, it is unfit as a candidate for a Universal File Format.

      OOXML is unfit as a UFF because it is application-, platform- and vendor-bound.

      For those of us who believe in an open and unencumbered universal file format, it's back to the drawing board.

      XHTML+ (XHTML + CSS3 + RDF) is looking very good. The challenge is proving that we can build plugins for MSOffice and OpenOffice that can fully implement XHTML+. Can we convert the billions of binary legacy documents and existing MSOffice-bound business processes to XHTML+?

      I think so.  But we can't be sure until the da Vinci proves this conclusively.
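
      As a rough sketch of what "XHTML+" markup could look like, with RDFa attributes carrying the RDF metadata (the RDFa attribute names are real; the vocabulary and property choices here are only illustrative):

        <div xmlns:dc="http://purl.org/dc/elements/1.1/" about="#chapter-3">
          <h2 property="dc:title">Terms of Payment</h2>
          <!-- the same XHTML a browser renders also carries machine-readable
               metadata that a conversion or search tool can use -->
          <p property="dc:description">Payment is due within thirty days of the
             invoice date.</p>
        </div>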

      One thing to keep in mind though: the internal plugins have already shown that it is possible to do multiple file formats. OOXML, ODF, and XML-encoded RTF have all been shown to work, and do so with the level of two-way conversion fidelity demanded by existing business processes.

      So why not try it with XHTML+, or ODEF (the eXtended version of ODF en
  • Microsoft's major XML-based format development priority was backward compatibility with its proprietary Office binary file formats.
    • Gary Edwards
       
      This backwards compatibility with the existing binary file formats isn't the big deal Microsoft makes it out to be. ODF 1.0 includes a "Conformance Clause" (Section 1.5) that was designed and included in the specification exactly so that the billions of binary legacy documents could be converted into ODF XML.

      The problem with the ODF Conformance Clause is that the leading ODF application, OpenOffice,  does not fully support and implement the Conformance Clause. 

      The only foreign elements supported by OpenOffice are paragraphs and text spans.  Critically important structural document characteristics such as lists, fields, tables, sections and page breaks are not supported!

      This leads to a serious drop in conversion fidelity wherever MS binaries are converted to OpenOffice ODF.
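
      To make the Conformance Clause point concrete, a "foreign element" inside ODF content.xml looks roughly like the fragment below; the legacy namespace and its attributes are hypothetical stand-ins for whatever a binary-format converter would need to preserve:

        <text:p text:style-name="Standard"
                xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
                xmlns:legacy="urn:example:binary-converter">
          <!-- a conforming ODF consumer may keep or silently drop this element;
               OpenOffice preserves foreign content only on paragraphs and spans -->
          <legacy:field legacy:type="DOCPROPERTY" legacy:name="Client">Acme Corp</legacy:field>
          remains the client of record for this engagement.
        </text:p>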

      Note that OpenOffice ODF is very different from MSOffice ODF as implemented by internal conversion plugins like da Vinci. KOffice ODF and Google Docs ODF are different ODF implementations again. Because there are so many different ways to implement ODF and still have "conforming" ODF documents, there is much truth to the statement that ODF has zero interoperability.

      It's also true that OOXML has optional implementation areas. With ODF we call these "optional" implementation areas "interoperability break points", because this is exactly where the presentation fidelity of document exchange breaks down, leaving the dominant-market ODF application as the only means of sustaining interoperability.

      With OOXML, the entire Vista Stack - Win32 dependency layer is "optional".  No doubt, all MSOffice - Exchange/SharePoint Hub applications will implement the full sweep of proprietary dependencies.    This includes the legacy Win32 API dependencies (like VML, EMF, EMF +), and the emerging Vista Stack dependencies that include Smart Documents, XAML, .NET 3.0 Libraries, and DrawingML.

      MSOffice 2007 i
  • ...6 more annotations...
  • Microsoft's backwards compatibility priority means the company made XML-based format decisions that compromise the open objectives of XML. Open Office XML is neither open nor XML.
    • Gary Edwards
       
      True, but a tricky statement given that the proprietary OOXML implementation is "optional". It is theoretically possible to implement Ecma 376 without the proprietary dependencies of MSOffice - Exchange/SharePoint Hub - Vista Stack "OOXML".

      In fact, this was first demonstrated by the legendary document processing - plugin architecture expert, Florian Reuter.

      Florian has the unique distinction of being the primary architect for two major plugins: the da Vinci ODF plugin for MSOffice, and, the Novell OOXML Translator plugin for OpenOffice!

      It is the Novell OOXML Translator Plugin for OpenOffice that first demonstrated that Ecma 376 could be cleanly implemented without the MSOffice application-platform-vendor specific dependencies we find in every MSOffice OOXML document.

      So while Joe is technically correct here that OOXML is neither open nor XML, there is a caveat. For 95% of all desktops, and nearly 100% of all desktops in a workgroup, Joe's statement holds true. For all practical concerns, that's enough. Microsoft's vaunted marketing spin machine, though, will make it sound as though OOXML is actually open and application-platform-vendor independent.


  • Microsoft got there first to protect Office.
    • Gary Edwards
       
      No. I disagree. Microsoft needs to move to XML structured documents regardless of what others are doing. The binary document model is simply not useful for any desktop-to-server-to-device-to-Web transport!

      Many wonder what Microsoft's SOA strategy is. Well, it's this: the Vista Stack based on OOXML-Smart Documents-.NET.

      The thing is, Microsoft could not afford to market a SOA solution until all the proprietary solutions of the Vista Stack were in place.

      The Vista Stack looks like this:

      ..... The core :: MSOffice <> OOXML <> IE <> The Exchange/SharePoint Hub

      ..... The services :: E/S Hub <> MS SQL Server <> MS Dynamics <> MS Live <> MS Active Directory Server <> MSOffice RC Front End

      The key to the stack is the OOXML-Smart Documents capture of EXISTING MSOffice bound business processes and documents.

      The trick for Microsoft is to migrate these existing business processes and documents to the E/S Hub, where line-of-business developers can re-engineer aging desktop LOB apps.

      The productivity gains that can be had through this migration to the E/S Hub are extraordinary.

      A little over a year ago an E/S Hub vertical market application called "Agent Achieve" came out for the real estate industry. AA competed against a twenty-year legacy of contact-management-based, MLS-data-connected desktop shrinkware applications. (MLS = Multiple Listing Service)

      These traditional desktop client/server productivity apps defined the real estate business process, as far as it could be said to be "digital".  For the most part, the real estate transaction industry remains a paper-driven process. The desktop stuff was only useful for managing clients and lead prospecting. No one could crack the electronic documents - electronic business transaction model.  This will no doubt change with the emer
  • Microsoft can offer businesses many of the informational sharing and mining benefits associated with the markup language while leveraging Office and supporting desktop and server products as the primary consumption conduit.
    • Gary Edwards
       
      Okay, now Joe has the Microsoft SOA bull by the horns.  Why doesn't he wrestle the monster down?
  • By adapting XML
    • Gary Edwards
       
      The requirements of these E/S Hub systems are Windows XP, MSOffice 2003 Professional, Exchange Server with OWA (Outlook Web Access), SharePoint Server, Active Directory Server, and at least four MS SQL Servers!

      In April of 2006, Microsoft issued a harsh and sudden End-of-Life for all Windows 2000 - MSOffice 2000 systems in the real estate industry (although many industries were similarly impacted). What happened is that on a Friday afternoon, just prior to a big open house weekend, Microsoft issued a security patch for all Exchange systems. Once the patch was installed, end users needed IE 7.0 to connect to the Exchange Server systems.

      Since there is no IE 7.0 made for Windows 2000, those users relying on E/S Hub applications, which was the entire industry, suddenly found themselves disconnected and near out of business.

      Amazingly, not a single user complained! Rather than getting pissed at Microsoft for the sudden and very disruptive EOL, the real estate users simply ran out to buy new XP-MSOffice 2003 systems. It was all done under the rationale that to be competitive, you have to keep up with technology systems.

      Amazing. But it also goes to show how powerfully productive the E/S Hub applications can be. This wouldn't have happened if the E/S Hub applications didn't have a very high productivity value.

      When we visited Massachusetts in June of 2006, to demonstrate and test the da Vinci ODF plugin for MSOffice, we found them purchasing E/S Hubs en masse! These are ODF killers! Yet Microsoft sales people had convinced Massachusetts ITD that Exchange/SharePoint was a simple-to-use email-calendar-portal system. Not a threat to anyone!

      The truth is that in the E/S Hub ecosystem, OOXML is THE TRANSPORT. ODF is a poor, second-class attachment of no use at the application and document-processing chain level.

      Even if Massachusetts had mandated ODF, they were only one E/S Hub Court Doc
  • Microsoft will vie for the whole business software stack, a strategy that I believe will be indisputable by early 2009 at the latest.
    • Gary Edwards
       
      Finally, someone who understands the grand strategy of leveraging the desktop monopoly into the converged space of server, device and web information systems.

      What Joe isn't watching is the way the Exchange/SharePoint Server connects to MS SQL Server, Active Directory Server, MS Live and MS Dynamics.

      Also, Joe does not see the connection between OOXML as the portable XML document/data transport, and the insidiously proprietary Smart Documents metadata - data binding system that totally separates MSOOXML from Ecma 376 OOXML!
  • I'm convinced that Office as a platform is an eventual dead end. But Microsoft is going to lead lots of customers and partners down that platform path.
    • Gary Edwards
       
      Yes, but the new platform for business process development is that of MSOffice <> Exchange/SharePoint Hub.

      The OOXML-Smart Docs transport replaces the old binary document with its OLE, VBA script and macro functionality, which, for the sake of brevity, we can call the legacy Win32 API dependencies.

      One substantial difference is that OOXML-Smart Docs is Vista Stack ready, while the Win32 API dependencies were desktop bound.

      Another way of looking at this is to see that the old MSOffice platform was great for desktop application integration.  As long as the complete Win32 API was available (Windows + MSOffice + VBA run times), this platform was great for workgroups.  The Line of Business integrated apps were among the most brittle of all client/server efforts, but they were the best for that generation.

      The Internet offers everyone a new way of integrating data, content and streaming media.  Web applications are capable of loosely coupled serving and consuming of other application services.  Back end systems can serve up data in a number of ways: web services as SOAP, web services as AJAX/REST, or XML data streams as in the XMLHttpRequest or Jabber P2P models.

      On the web services consumption side, it looks like AJAX/REST will be the blockbuster choice, if the governance and security issues can be managed.
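
      As a loose illustration of that consumption side, here is a minimal sketch of a client pulling an XML feed over plain HTTP without ever needing to know what stack produced it. The endpoint URL and the feed's element names are hypothetical, chosen only to show the pattern.

      import urllib.request
      import xml.etree.ElementTree as ET

      # Hypothetical REST-style endpoint serving an XML feed. The client depends only on
      # HTTP and the agreed XML shape, not on the server's application stack.
      LISTINGS_URL = "http://example.com/api/listings?city=Boston"

      def fetch_listings(url=LISTINGS_URL):
          """Fetch the feed and return (id, price) pairs from <listing> elements."""
          with urllib.request.urlopen(url) as response:
              root = ET.fromstring(response.read())
          # Assumes entries of the form <listing id="..."><price>...</price></listing>
          return [(item.get("id"), item.findtext("price")) for item in root.iter("listing")]

      if __name__ == "__main__":
          for listing_id, price in fetch_listings():
              print(listing_id, price)

      The same feed could just as easily be consumed in a browser via XMLHttpRequest; that interchangeability is the loose coupling described above.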

      Into this SOA mash Microsoft will push with a sweeping integrated stack model.  Since the Smart Docs part of the OOXML-Smart Docs transport equation is totally proprietary, but used throughout the Vista Stack, it will provide Microsoft with an effective customer lock-in / OSS lock-out point.

Gary Edwards

What Oracle Sees in Sun Microsystems | NewsFactor Network - 0 views

  • Citigroup's Thill estimates Oracle could cut between 40 percent and 70 percent of Sun's roughly 33,000 employees. Excluding restructuring costs, Oracle expects Sun to add $1.5 billion in profit during the first year after the acquisition closes this summer, and another $2 billion the following year. Oracle executives declined to say how many jobs would be eliminated.
  •  
    Good article from Aaron Ricadela. The focus is on Java, Sun's hardware-server business, and Oracle's business objectives. No mention of OpenOffice or ODF though. There is however an interesting quote from IBM regarding the battle between Java and Microsoft .NET. Also, no mention of an OpenOffice-Java Foundation that would truly open source these technologies.

    When we were involved with the Massachusetts Pilot Study and ODF Plug-in proposals, IBM and Oracle led the effort to open source the da Vinci plug-in. They put together a group of vendors known as "the benefactors", with the objective of completing work on da Vinci while forming a patent pool and open source foundation for all OpenOffice and da Vinci source. This idea was based on the Eclipse model.

    One of the more interesting ideas coming out of the IBM-Oracle led "benefactors", was the idea of breaking OpenOffice into components that could then be re-purposed by the Eclipse community of developers. The da Vinci plug-in was to be the integration bridge between Eclipse and the Microsoft Office productivity environment. Very cool. And no doubt IBM and Oracle were in synch on this in 2006. The problem was that they couldn't convince Sun to go along with the plan.

    Sun of course owned both Java and OpenOffice, and thought they could build a better ODF plug-in for OpenOffice (and own that too). A year later, Sun actually did produce an ODF plug-in for MSOffice. It was sent to Massachusetts on July 3rd, 2007, and tested against the same set of 150 critical documents da Vinci had to successfully convert without breaking. The next day, July 4th, Massachusetts announced their decision that they would approve the use of both ODF and OOXML! The much hoped for exclusive ODF requirement failed in Massachusetts exactly because Sun insisted on their way or the highway.

    Let's hope Oracle can right the ship and get OpenOffice-ODF-Java back on track.

    "......To gain
Gary Edwards

Real World Government Open Source - Bill Welty and Government OSS Technology - 0 views

  • Welty gave a rapid-fire look at the realities of open source in government. "The doors have been blown open in California," he said. "In 2004 when Governor Schwarzenegger signed the California Performance Review a section called State Operations #10 specifically authorized the use of open source." Operations #10 says: "Departments should take an inventory of software purchases and software renewals in the Fiscal Year 2004-2005 and implement open source alternatives where feasible."
  •  
    If there is an Open Source hero in government, it's Bill Welty.  This article, published in Government Technology, covers the GOSCON 2006 conference where Bill spoke to the realities of open source software in government.  Bill will also be speaking at the October 2007 GOSCON Conference in Portland, Oregon.  He doesn't pull his punches :)  Even with Microsoft, IBM, Novell, and Sun sitting across the table.
Gary Edwards

ODF and OOXML - The Final Act - 0 views

  • The format war between Microsoft’s Open Office XML (OOXML) and the open source OpenDocument Format (ODF) has flared up again, right before the looming second OOXML ISO vote in March.
  • “ISO has a policy that, wherever possible, there should only be one standard to maximise interoperability and functionality. We have an international standard for digital documentation, ODF,” IBM’s local government programs executive Kaaren Koomen told AustralianIT.
  • ODF has garnered some criticism for being a touch limited in scope, however, one of its strengths is that it has already been accepted as a worldwide ISO standard. Microsoft’s format on the other hand, has been criticised for being partially proprietary, and even a sly attempt by the software giant to hedge its bets and get in on open standards while keeping as many customers locked into its solutions as possible.
    • Gary Edwards
       
      A "touch limited in scope"? Youv'e got to be kidding. ODF was not defined to be compatible with the billions of MSOffice binary (BIN) documents. Nor was it designed to further interoperability with MSOffice.
      Given that there are over 550 million MSOffice desktops, representing upwards of 95% of all desktop productivity environments, this discrepancy of design would seem to be a bit more than a touch limited in scope!
      Many would claim that this limitation was due to two factors: first, that Microsoft refused to join the OASIS ODF TC, which would have resulted in an expanded ODF designed to meet the interoperability needs of the great herd of 550 million users; and second, that Microsoft refused to release the secret binary blueprints.
      Since it turns out that both IBM and Sun have had access to the secret binary blueprints since early 2006, and in the two years since have done nothing to improve ODF interop and conversion fidelity, this second claim doesn't seem to hold much water.
      The first claim, that Microsoft didn't participate in the OASIS ODF process, is a bit more interesting. If you go back to the first OASIS ODF Technical Committee meeting, December 16th, 2002, you'll find that there was a proposal to amend the proposed charter to include the statement that ODF (then known as Open Office XML) be compatible with existing file formats, including those of MSOffice. The "MSOffice" reference was of course not included because ODF sought to be application, platform and vendor independent. But make no mistake, the discussion that day in 2002 was about compatibility and the conversion of the legacy BIN's into ODF.
      The proposal to amend the charter was tabled. Sun objected, claiming that people would interpret the statement as a direct reference to the BIN's, clouding the charter's purpose of application, platform and vendor independence. They proposed that the charter amendment b
    • Gary Edwards
       
      Will harmonization work? I don't think so. The problem is that the DIN group is trying to harmonize two application-specific formats. OpenOffice has one way of implementing basic document structures, and MSOffice another. These differences are directly reflected in the related formats, ODF and OOXML. Any attempt to harmonize ODF and OOXML will require that the applications, OpenOffice and MSOffice, be harmonized! There is no other way of doing this unless the harmonized spec has two different methods for implementing basic structures like lists, tables, fields, sections and page dynamics. Not to mention the problems of feature disparities. If the harmonized spec has two different implementation models for basic structures, interoperability will suffer enormously. And interoperability is, after all, the purpose of the standardization effort.

      That brings us to a difficult compromise. Should OpenOffice compromise its "innovative" features and methods in favor of greater interoperability with MSOffice and billions of binary documents? Let me see: 100 million OpenOffice installs vs. 550 million MSOffice installs bound to workgroup-workflow business processes, many of which are critical to day-to-day business operations? Sun and IBM have provided the answer to this question. They are not about to compromise on OpenOffice innovation! They believe that since their applications are free, the cost of an ODF-mandated "rip out and replace" is adequately offset.

      Events in Massachusetts prove otherwise! On July 2nd, 2007, Sun delivered to Massachusetts the final version of their ODF plug-in for MSOffice. That night, after reviewing and testing the 135 critical documents, Massachusetts made a major change to their ETRM web site. They amended the ETRM to fully recognize OOXML as an acceptable format standard going forward. The Massachusetts decision to overturn th
  • ...1 more annotation...
    • Gary Edwards
       
      The Burton Group did not recommend that ISO recognize OOXML as a standard! They pointed out that the marketplace is going to implement OOXML by default simply because it's impossible to implement ODF in situations where MSOffice dominates. ISO should not go down the slippery slope of recognizing application-platform-vendor specific standards. They already made that mistake with ODF, and recognizing OOXML is hardly the fix. What ISO should be doing is demanding that ODF fully conform with ISO Interoperability Requirements, as identified in the May 2006 directive! Forget OOXML. Clean up ODF first.
  •  
    Correcto mundo! There should be only one standard to maximise interoperability and functionality. But ODF is application-specific to the way OpenOffice works. It was not designed from a clean slate. Nor was the original 2002 OpenOffice XML spec designed as an open source effort! Check the OOo source code if you doubt this claim. The ONLY contributors to Open Office XML were Sun employees!

    What the world needs is in fact a format standard designed to maximise interoperability and functionality. This requires a total application-platform-vendor independence that neither ODF nor OOXML can claim. The only format that meets these requirements is the W3C's family of HTML-XML formats. These include advancing Compound Document Framework components such as (X)HTML-5, CSS-3, XForms, SVG and SMIL.

    The W3C's CDF does in fact meet the marketplace needs of a universal format that is open, unencumbered and totally application, platform and vendor independent. The only trick left for CDF is proving that legacy desktop applications can actually implement conversions from existing in-memory binary representations to CDF without loss of information.
Gary Edwards

ODF Turns Five | Linux - 2 views

  •  
    ODF was created on the principles that interoperability and innovation were paramount, and that these are based on open standards. Not coincidentally, ODF's creation coincided with the growing support of open ICT architectures, which grew from the Web model where the standardization of HTML, an open, royalty-free standard, enabled the Web to be an open platform that enabled much innovation on top of it. The key was interoperability, or the ability of multiple parties to communicate electronically, without the need to all run the same application software or operating system. Also critical to the development of ODF was the introduction of OpenOffice.org, the open source office suite that first implemented the format, and the rise of XML as a widely supported foundational standard for describing structured data.
Gary Edwards

Is Open Source Dying? - 0 views

  • But behind the scenes, things are not quite as rosy. The Commonwealth of Massachusetts, which lived up to its left-leaning credentials (didn't Microsoft CEO Steve Ballmer famously upbraid open-source proponents for being Communists?) broke important ground by mandating that state agencies switch to open-source platforms. There's just one problem: They can't seem to manage the transition. Sources close to the situation tell me that former state CIO Peter Quinn's resignation happened at least in part because of delaying tactics by vendors who publicly support open source but do their best to scuttle it behind the scenes.
  •  
    Interesting topic, which I've covered more fully on the OpenStack Blog: Connecting the Dots
Gary Edwards

ODF vs. OOXML: War of the Words | Andrew Updegrove: Tales of Adversego - 0 views

  •  
    "For some time I've been considering writing a book about what has become a standards war of truly epic proportions.  I refer, of course, to the ongoing, ever expanding, still escalating conflict between ODF and OOXML, a battle that is playing out across five continents and in both the halls of government and the marketplace alike.  And, needless to say, at countless blogs and news sites all the Web over as well. Arrayed on one side or the other, either in the forefront of battle or behind the scenes, are most of the major IT vendors of our time.  And at the center of the conflict is Microsoft, the most successful software vendor of all time, faced with the first significant challenge ever to one of its core businesses and profit centers - its flagship Office productivity suite. The story has other notable features as well:  ODF is the first IT standard to be taken up as a popular cause, and also represents the first "cross over" standards issue that has attracted the broad support of the open source community.  Then there are the societal dimensions: open formats are needed to safeguard our culture and our history from oblivion.  And when implemented in open source software and deployed on Linux-based systems (not to mention One Laptop Per Child computers), the benefits and opportunities of IT become more available to those throughout the third world. There is little question, I think, that regardless of where and how this saga ends, it will be studied in business schools and by economists for decades to come.  What they will conclude will depend in part upon the materials we leave behind for them to examine.  That's one of the reasons I'm launching this effort now, as a publicly posted eBook in progress, rather than waiting until some indefinite point in the future when the memories of the players in this drama have become colored by the passage of time and the influence of later events. My hope is that those of you who have played or are n
Gary Edwards

5 Things Microsoft Must Do To Reclaim Its Mojo In 2008 -- InformationWeek - 0 views

  • Instead of fighting standards, Microsoft (NSDQ: MSFT) needs to get on board now more than ever. With open, Web-based office software backed by the likes of IBM (NYSE: IBM) (think Lotus Symphony) and Google (NSDQ: GOOG) now a viable option, users—especially businesses frustrated by Microsoft's format follies (many are discovering that OOXML is not even fully backwards-compatible with previous versions of Microsoft Word)--can now easily switch to an online product without having to rip and replace their entire desktop infrastructure.
    • Gary Edwards
       
      This article discusses how Microsoft might change their ways and save the company. This particular quote concerns Microsoft support for standards, and their fight to push MS OOXML through ISO as an alternative to ISO approved ODF 1.0.
      The thing is, ODF was not designed for the conversion of MSOffice documents, of which there are billions. Nor was ODF designed to be implemented by MSOffice. ODF was designed exactly for OpenOffice, which has a different model for implementing basic document structures than MSOffice.
      So a couple of points regarding this highlight:
      The first is that IBM's Lotus Symphony is NOT Open Source. IBM ripped off the OpenOffice 1.1.4 code base back when it was dual-licensed under both SISSL and LGPL. IBM then closed the source code, adding a wealth of proprietary eXtensions (think XForms and Lotus Notes connections). Then IBM released the proprietary Symphony as a free alternative to the original Open Source Community "OpenOffice.org".
      If Microsoft had similarly ripped off an open source community, there would be hell to pay.
      Another point here is the mistaken assumption that users can easily switch from MSOffice to an on-line product like Google Docs or ZOHO "without having to rip out and replace their entire desktop infrastructure."
      This is a ridiculous assumption defied by the facts on the ground. Massachusetts spent two years trying to migrate to ODF and couldn't do it. Every other pilot study known has experienced the same difficulties!
      The thing about Web 2.0 alternatives is that these services cannot be integrated into existing business processes and MSOffice workgroup-bound activities. The collaborative advantages of Web 2.0 alternatives are disruptive and outside existing workflows, greatly marginalizing their usefulness. IF, and that's a big IF, MSOffice plug-ins were successful in the high-fidelity round-trip conversion of wor
  • Microsoft in 2008 could make a bold statement in support of standards by admitting that its attempt to force OOXML on the industry was a mistake and that it will work to develop cross-platform compatibility between that format and the Open Document Format
    • Gary Edwards
       
      It's impossible to harmonize two application-specific file formats. The only way to establish an effective compatibility between ODF and OOXML would be to establish a compatibility between OpenOffice and MSOffice.
      The problem is that neither ODF nor OOXML was developed as a generic file format. They are both application-specific, directly reflecting the particular implementation models of OOo and MSOffice.
      Sun and the OASIS ODF TC are not about to compromise OpenOffice feature sets and implementation methods to improve interop with MSOffice. Sun in particular will protect the innovative features of OpenOffice that are reflected in ODF and stubbornly incompatible with MSOffice and the billions of binary documents. This fact can easily be proven by any review of the infamous "List Enhancement Proposal" that dominated discussions at the OASIS ODF TC from November of 2006 through May of 2007.
      So if Sun and the OASIS ODF TC refuse to make any effort towards compatibility and improved interop with MSOffice and the billions of binary documents seeking conversion to ODF, then it falls to Microsoft to alter MSOffice. With 550 million MSOffice desktops involved in workgroup-bound business processes, any changes would be costly and disruptive. (Much to the glee of Sun and IBM.)
      IBM in particular has committed a good amount of resources and money lobbying for government mandates establishing ODF as the accepted format. This would of course result in a massively disruptive and costly rip out and replace of MSOffice.
      Such are the politics of ODF.
Gary Edwards

Brendan's Roadmap Updates: My @media Ajax Keynote - 0 views

  • Standards often are made by insiders, established players, vendors with something to sell and so something to lose. Web standards bodies organized as pay-to-play consortia thus leave out developers and users, although vendors of course claim to represent everyone fully and fairly. I've worked within such bodies and continue to try to make progress in them, but I've come to the conclusion that open standards need radically open standardization processes. They don't need too many cooks, of course; they need some great chefs who work well together as a small group. Beyond this, open standards need transparency. Transparency helps developers and other categories of "users" see what is going on, give corrective feedback early and often, and if necessary try errant vendors in the court of public opinion.
    • Gary Edwards
       
      Brendan's comment about the open standards process, and the control big vendors have over that process, is exactly right. The standards consortia are pay-to-play orgs controlled entirely by big vendors. OASIS and the OpenDocument Technical Committee are not exceptions to this problematic and troublesome truth.
      The First Law of the Internet is that Interoperability trumps everything - including innovation. The problem with vendor-driven open standards is that innovation continually trumps interoperability. So much so that interop is pretty much an afterthought - as is the case with ODF and OOXML!
      The future of the Open Web will depend on open source communities banding together with governments and user groups to insist on the First Law of the Internet: Interoperability. If they don't, vendors will succeed in creating slow-moving web standards designed to service their product lines. Vendor product lines compete and are differentiated by innovative features. Interoperability, on the other hand, is driven by sameness - the sharing of critical features. Driving innovation down into the interop layer is what the open standards process should be about. But as long as big vendors control that process, those innovations will reside at the higher level of product differentiation. A level that continues to break interoperability!
Gary Edwards

Microsoft pushes Trade Secrets Bill - 1 views

  • A spokesman for the Microsoft On The Issues website has expressed the company’s support for new legislation that would reform the legal framework for companies wishing to protect their trade secrets in a cloud-centric world where such information is frequently forced to reside on networks. In the post Microsoft’s Assistant General Counsel of IP Policy & Strategy Jule Sigall rallies behind business and academic concerns supporting the proposed Defend Trade Secrets Act 2015 (DTSA), which goes before the United States Senate Judiciary Committee today. Sigall, who is also Associate General Counsel for Copyright in Microsoft’s Legal & Corporate Affairs department, makes an ardent case for reform of the current legislation, as furnished by the Uniform Trade Secrets Act (UTSA). UTSA’s provisions are argued to be fractured, and rendered ineffective both by the inability of plaintiffs to pursue suits in federal courts (despite trade secret infractions being Federal by nature), and by the fact that not all states have adopted or instituted all the measures provided by the legislation. Additionally the limited provision for redress in international cases of trade secret theft are to be addressed.
  • Sigall presents the case of Microsoft’s Cortana AI as an example of why new legislation is necessary: ‘[Behind] Cortana sits a vast amount of technology developed or enhanced in-house by Microsoft – voice recognition; language translation; reactive and predictive algorithms that can synthesize context, location and data, and interface with the vast resources of the Bing search engine index; and a complex array of cloud servers to crunch and serve data in real time. This technology represents tens of thousands of hours of research, trial and error, and continued improvement as Cortana is adapted for new devices and new scenarios’
  • Sigall argues that better protection procedures for trade secrets, the only form of IP which currently lacks comprehensive cover in law, is essential for start-ups whose ideas, business plans and even customer lists may constitute the only marketable value of a company that is just in the stage of consolidating. ‘A trade secret is unique among forms of intellectual property in how it is legally protected. While it is a federal crime to steal a trade secret, a business that has its trade secrets stolen must rely on state law to pursue a civil remedy. Owners of copyrights, patents, and trademarks can go to federal court to protect their property and seek damages when their property has been infringed, but trade secret owners do not have access to such a federal remedy.’
  • ...7 more annotations...
  • Defend Trade Secrets Act 2015 contains [PDF] significant material from its doomed predecessor of 12 months ago, and one of its boldest initiatives is the extension of ex parte seizures, instituted in UTSA in a more limited form (particularly in the 1985 amendment to the Uniform Law Commission’s 1979 initial legislation). An ex parte seizure provides a kind of restraining order or injunction on disputed information, or even the dissemination of knowledge about whether the information is disputed, and places it under federal protection on the plaintiff’s behalf.
  • Microsoft had a hard time adjusting to the open source revolution, particularly in regard to the PC/Mac Office product which at one time represented the most successful and ubiquitous software in the world, and the many legal and semantic wrangles over the closed-source nature of Office formats such as Word led ultimately to a hybridised open source .docx format which is still argued to not be the OpenXML that was promised.
  • According to Sigall the state-by-state system currently in place was ‘simply not built with the digital world in mind’, and calls for ‘A uniform, national standard for protection’ which does not stop at state lines or even national borders.
  • In practical terms this seems likely to extend the circumstances under which information about leaks, hacks or thefts of information can be made the subject of gag orders for legal reasons, since it brings trade secrets into the same legal framework as other forms of intellectual property which enjoy more comprehensive coverage and recourse in law. The bill would also extend the purview of the 1996 Economic Espionage Act to take in a more rigorously conceived concept of ‘trade secrets’.
  • Even with the issues clear, the risk of disproportionate or over-reaching response in the event of the new bill passing successfully through congress in 2016 (it is unlikely to pass this year) is clear enough that the lack of network discussion about it is quite surprising. Essentially DTSA represents the same kind of proposed ‘judicial fast track’ – though in favour of corporations instead of governments – that has outraged so many commenters in the wake of the November 13th Paris attacks.
  • Silence in court Amongst its more quotidian clauses, the Defend Trade Secrets Act 2015 effectively offers corporate plaintiffs increased opportunity to federalise disputed private material in cases involving trade secrets, with all the penalties for infraction associated with that change of status – and far greater scope for sub judice orders likely to contain and conceal future breaches of information.
  • Eric Goldman of the Santa Clara University School of Law has just published a paper outlining the risks of extending ex parte seizures in the manner that DTSA 2015 proposes. Goldman writes that ‘the Seizure Provision does not solve many, if any, problems. In light of the remedies already available to trade secret owners in ex parte temporary restraining orders (TROs), the Seizure Provision purports to apply to only a narrow set of additional circumstances. In exchange for that modest benefit, the Seizure Provision creates the risk of anti-competitive seizures and seizures that cause substantial collateral damage to innocent third parties. To discourage such abuses, the Act imposes procedural safeguards and creates a cause of action for wrongful seizures. Unfortunately, those safeguards are miscalibrated to achieve the desired protections against abusive seizures.’
  •  
    Lots of possible Constitutional issues lurking. The Constitution creates only two types of intellectual property, patents and copyrights. "(P)roperty interests . . . are not created by the Constitution. Rather, they are created and their dimensions are defined by existing rules or understandings that stem from an independent source such as state law." Ruckelshaus v. Monsanto Co., 467 US 986 (1984), https://goo.gl/ZljO1H (trade secrets case). The traditional source of rights in trade secrets has been state law. Thus there is a states' rights issue lurking in this legislation, a question whether the federal government is invading the States' police power, an "our federalism" question.
Gary Edwards

Joint letter to the Open Source Community From Novell and Microsoft - 0 views

  •  
    This makes me sick. The indemnification nazis are driving a patent wedge right through the heart and soul of open source.
Gary Edwards

LOL :: Microsoft's Jean Paoli on the XML document debate - 0 views

  • What’s distinctive about the goals of OOXML? Primarily, to have full fidelity with pre-existing binary documents created in Microsoft Office. “What people want is to make sure that their billions of important documents can be saved in a format where they don’t lose any information. As a design goal, we said that those formats have to represent all the information that enables high-fidelity migration from the binary formats”, says Paoli. He mentions work with institutions including the British Library and the US Library of Congress, concerned to preserve the information in their electronic archive. I asked Paoli if such users could get equally good fidelity by converting their documents to ODF. “Absolutely not,” he says. “I am very clear on that. Those two formats are done for different reasons.” What can go wrong? Paoli gives as an example the myriad ways borders can be drawn round tables in Microsoft Office and all its legacy versions. “There are 100 ways to draw the lines around a table,” he says. “The Open XML format has them all, but ODF which has not been designed for backward compatibility, does not have them. It’s really the tip of the iceberg. So if someone translates a binary document with a table to ODF, you will lose the framing details. That is just a very small example.”
  • “Open Document Format and Office Open XML have very different goals”, says Paoli, responding to the claim that the world needs only one standard XML format for office documents. “Both of them are formats for documents … both are good.”
    • Gary Edwards
       
      The door should have been slammed shut on OOXML near five years ago when, on December 16th, 2002, at the very first OASIS ODF TC meeting, Stellent's Phil Boutros proposed that the charter include "compatibility with existing file formats and interoperability with existing applications" as a priority objective.
  • I put it to Paoli that OOXML is hard to implement because of all its legacy support, some of which is currently not well documented. “I don’t believe that at all. It’s actually the opposite,” he says. He make the point that third parties like Corel, which have previously implemented support for binary formats like .doc and .xls, should find it easy to transition to OOXML. “We believe Open XML adoption by vendors like Corel will be very easy because they have already been doing 90% of the work, doing the binary formats. The features are already there.”
    • Gary Edwards
       
      WordPerfect does an excellent import of MSWord .doc documents. But there is no conversion! It's a read-only rendering. Once you start editing the document in WP, all kinds of funny things happen, and the perfect fidelity melts away like the wicked witch of the west in a bucket full of water.
  • ...5 more annotations...
  • Another benefit Paoli claims for OOXML is performance. “A lot of things are designed differently because we believe it will work faster. The spreadsheet format has been designed for very big spreadsheets because we know our users, especially in the finance industry, use very large spreadsheets.
    • Gary Edwards
       
      Wrong. The da Vinci plug-in prototype we demonstrated to Massachusetts on June 19th, 2006 proved that there is little or no difference in spreadsheet performance between an OOXML file and an ODF file.

      In fact, the ODF version of the extremely large test file beat the OOXML load by 12 seconds.

      Where the performance difference comes in is at the application level. MS Excel can load an OOXML version of a large spreadsheet faster than OpenOffice can load an ODF version of that same spreadsheet.

      If you eliminate the application differential, and load the OOXML file and the ODF version of that same spreadsheet into a plug-in enabled Excel, the performance differences are negligible.

      The reason for this is that the OOXML plug-in for Excel has a conversion overhead identical to the da Vinci plug-in for Excel. It has nothing to do with the file format, and everything to do with the application.

      ~ge~
  • Paoli points to the conversion errors as evidence of how poorly ODF can represent legacy Office documents. My hunch is that this has more to do with the poor quality of the converter.
    • Gary Edwards
       
      Note that these OASIS ODF TC November 20th iX "interoperability enhancement" suggestions were submitted by Novell as part of their effort to perfect an OOXML plug-in for OpenOffice!

      "Lists" were th first of these iX items to be submitted as formal proposal. And Sun fought that list proposal viciously for the next four months. The donnybrook resulted i a total breakdown of the ODF consensus process. But, it ensured that never again would anyone be stupid enough to challenge Sun's authority and control of the OASIS ODF TC.

      Sun made it clear that they would viciously oppose any other effort to establish interoperability with existing Microsoft documents, applications, and processes.

      Point taken.

      ~ge~
  • the idea that Sun is preparing a reference implementation of OOXML is laughable.
    • Gary Edwards
       
      Sorry Tim. It's true. Sun and Novell are working together to develop native OOXML file support in OpenOffice. You can find this clearly stated in the Gullfoss Planet OpenOffice blogs.

      The funny thing is that Sun will have to implement and support the November 20th iX enhancements submitted by Novell!! (Or, the interoperability frameworks also submitted by Novell in February of 2007). There is simply no other way for OpenOffice to implement OOXML with the needed fidelity.

      ~ge~
  • One of new scenarios enabled by the “custom xml parts” (again, if you read their blogs, you must have heard of this stuff) is the ability to bind xml sources and a control+layout so that it enables the equivalent of data queries (we’ve had in Excel for many years already), just with a source which is part of the package, contrary to the typical external data source connection. Well this stuff, besides the declaration (which includes, big surprise, GUIDs and stuff like that) requires the actual Office 2007 run-time to work. So whenever MS says this stuff is interoperable, they cannot mean you can take this stuff away in another application. Because you can’t. This binding is more or less the same than the embedding of VBA macros. It’s all application-specific, and only Microsoft’s own suite knows how to instantiate this stuff.
    • Gary Edwards
       
      Stephane whacks this one out of the park! Smart Documents will replace VBA scripts, macros and OLE functionality going forward. It's also the data-binding, workflow and metadata model of the future. And it's all proprietary!

      It's the combination of OOXML plus the MSOffice-Vista Stack-specific Smart Documents that will lock end users into the Vista Stack for years to come.

      Watch out Google!

      ~ge~
  • Has Microsoft published the .doc spec publicly? Then why should ODF worry about the past? It’s not ODF’s concern to worry about Microsoft’s past formats. (Understand that the .doc format alone changed six times in the last eight versions of Office!) That’s Microsoft’s legacy problem, not ODF’s.
    • Gary Edwards
       
      There really is no need to access the secret binary blueprints. The ACME 376 plug-in demonstration proves this conclusively. The only thing the ACME 376 demo lacks is that we didn't throw the switch on the magic key to release all VBA scripts, macros and OLE bindings to ACME. But that can be done if someone is serious about converting the whole shebang of documents, applications and processes.

      The real problem is that although ACME 376 proves we can hit the high fidelity required, it is impossible to effectively capture that fidelity in ODF without the iX interoperability enhancements. The world expects ODF interoperability. But as long as Sun opposes iX, we can't pipe from ACME 376 to ODF.

      ~ge~
  •  
    Tim Anderson interviews Microsoft's Jean Paoli about MOOXML and ODF.  Jean Paoli of course has the predictable set of answers.  But Tim Anderson provides us with some interesting insights and comments of his own.  There is also a gem of a comment from Stephane Rodriguez, the renowned spreadsheet expert.

    The bottom line for Microsoft has not changed.  MOOXML exists because of the need for an XML file format compatible with the legacy of existing MSOffice binary documents.  He claims that ODF is not compatible, and offers the "page borders" issue as an example.

    Page borders?  What's that got to do with the ODF file format?  These are application-specific, application-bound proprietary graphics that cannot be ported to any other application - like OpenOffice.  The reason has nothing whatsoever to do with ODF and everything to do with the fact that the page border library is bound to MSOffice and not available to other applications like OpenOffice.

    So here is an application-specific feature that Jean Paoli claims cannot be expressed in ODF, but can in MOOXML.  But when we are running the da Vinci ODF plugin in MSWord, there is no problem whatsoever in capturing the page borders in ODF.  No problem at all!

    The problem is opening up that same da Vinci MSWord document in OpenOffice.  That's where the page borders are dropped.  The issue is based entirely on the fact that OpenOffice is unable to render these MSWord specific graphics bound to an MSOffice only library.

    If however we take that same page-border-loaded da Vinci MSWord document, and send it halfway across the world to another MSWord desktop running da Vinci, the da Vinci plugin easily loads the ODF document into MSWord, where it is perfectly rendered, page borders and all!

    Now I will admit that this is one very difficult issue to understand.  If not f
  •  
    Great interview. Tim can obviously run circles around poor Jean Paoli.
Gary Edwards

LibreOffice 4.3 boosts document compatibility | InfoWorld - 0 views

  •  
    "Version 4.3 of LibreOffice, the free and open source productivity suite developed by the Document Foundation and derived from the OpenOffice.org project, was released today. Aside from the usual array of bug fixes and new features designed to make it more cross-compatible with Microsoft Office, version 4.3 has features that give files from legacy Macintosh productivity software a new lease on life. Take control! 30 essential OS X command-line tips Go beyond the graphical user interface and take full advantage of Mac OS X at the command line READ NOW Most of the improvements around file handling in 4.3 involve better support for various aspects of the Office Open XML (OOXML) format used by Microsoft for its productivity software. LibreOffice users have often complained of opening Word 2010 or Word 2013 documents and finding that the formatting had been mangled or features like annotations hadn't survive being resaved in LibreOffice. Version 4.3 preserves many more of the attributes used in OOXML documents, such as style attributes for text and images. Also new to this edition of LibreOffice is import support for document formats created by a slew of legacy Macintosh applications: BeagleWorks, ClarisWorks, Claris Resolve, GreatWorks, MacWorks, SuperPaint, and Wingz. Likewise, Microsoft Works spreadsheets and databases -- not just word processing documents -- can now also be imported into LibreOffice. Another change, which might not directly affect many users but hints at how the refactoring of LibreOffice's code is reaching many legacy issues, involves the lengths of paragraphs. Previously, paragraphs in a LibreOffice document couldn't exceed 65,000 characters due to a bug in the underlying OpenOffice.org code that had persisted for over a decade and remained unclosed. Other changes include comments that can now be "printed in the document margin, formatted in a better way, and imported and exported," according to the Document Foundation; better behaviors for sp