Document Wars: Group items tagged "semantic"


Gary Edwards

XML.com: Standard Data Vocabularies Unquestionably Harmful - 0 views

  • At the onset of XML four long years ago, I commenced a jeremiad against Standard Data Vocabularies (SDVs), to little effect. Almost immediately after the light bulb moment -- you mean, I can get all the cool benefits of the web in HTML and create my own tags? I can call the price of my crullers <PricePerCruller>, right beside <PricePerDonutHole> in my menu? -- new users realized the problem: a browser knows how to display a heading marked as <h1> bigger and more prominently than a lowlier <h3>. Yet there are no standard display expectations or semantics for the XML tags which users themselves create. That there is no specific display for <Cruller> and, especially, not as distinct from <DonutHole> has been readily understood to demonstrate the separation of data structure expressed in XML from its display, which requires the application of styling to accommodate the fixed expectations of the browser. What has not been so readily accepted is that there should not be a standard expectation for how a data element, as identified by its markup, should be processed by programs doing something other than simple display.
    • Gary Edwards
       
      ODF and OOXML are contending to become the Standard Data Vocabulary for desktop office suite XML markup. Sun and Microsoft are proposing the standardization of OpenOffice and MSOffice custom-defined XML tags for which there are no standard display expectations. The display expectations must therefore be very carefully described: i.e., the semantics of display fully provided.
      In this article Walter Perry is pointing out the dangers of SDVs being standardized for specific purposes without also having well-thought-out and fully specified display semantics. In ODF/OOXML speak, we would call display "presentation", or "layout", or "styles".
      The separation of content and presentation is woefully underspecified in both formats!
      Given that the presentation layers of both ODF and OOXML are directly related to how the OpenOffice and MSOffice layout engines work, the semantics of display become even more important. For MSOffice to implement an "interoperable" version of OpenOffice ODF, MSOffice must be able to mimic the OpenOffice layout engine methods. Methods which are of course quite different from the internal layout model of MSOffice. This differential results in a breakdown of conversion fidelity, and therein lies the core of the ODF interoperability dilemma!
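
    To make Perry's cruller example concrete, here is a minimal sketch of the kind of self-invented vocabulary he describes (all tag names are invented for illustration):

        <?xml version="1.0"?>
        <Menu>
          <Item>
            <Cruller>Plain Cruller</Cruller>
            <PricePerCruller currency="USD">0.85</PricePerCruller>
          </Item>
          <Item>
            <DonutHole>Glazed Donut Hole</DonutHole>
            <PricePerDonutHole currency="USD">0.25</PricePerDonutHole>
          </Item>
        </Menu>

    Unlike <h1> versus <h3>, nothing tells a browser how to display <PricePerCruller>, and nothing tells a purchasing program what it means; both the display and the processing semantics have to be supplied out of band, which is exactly the gap Perry is worried about.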
  • There have also emerged a few "horizontal" data vocabularies, intended for expressing business communication in more general terms. One of these is the eXtensible Business Reporting Language (XBRL), about which more below. Most recently, governments and governmental organizations have begun to suggest and eventually mandate particular SDVs for required filings, a development which expands what troubles me about these vocabularies by an order of magnitude.
  • ...5 more annotations...
    • Gary Edwards
       
      Exactly! When governments mandate a specific SDV, they are also mandating the inherent concepts and methods unique to the provider of the SDV. In the case of ODF and OOXML, where the presentation layers are application specific and woefully underspecified, interoperability becomes an insurmountable challenge. Interop remains stubbornly application bound.
      Furthermore, there is no way to "harmonize" or "map" from one format to another without somehow resolving the application-specific presentation differences.
    • Gary Edwards
       
      "in the nature of the SDV's themselves is the problem of misstatement, of misdirection of naive interpretation, and potential for fraud.
      Semantics matter! The presentation apsects of a document are just as important as the content.
    • Gary Edwards
       
      Walter: "I have argued for years that, on the basis of their mechanism for elaborating semantics, SDVs are inherently unreliable for the transmission or repository of information. They become geometrically less reliable when the types or roles of either the sources or consumers of that information increase, ending at a nightmarish worst case of a third-order diminution of the reliability of information. And what is the means by which SDVs convey meaning? By simple assertion against the expected semantic interpretations hard-coded into a process consuming the data in question.
      At this point in the article i'm hopign Walter has a solution. How do we demand, insist and then verify that SDV's have fully specifed the semantics, and not jus tpassed along the syntax?
      With ODF and OOXML, this is the core of the interoperability problem. Yet, there really is no way to separate the presentation layers from the uniquely different OpenOffice and MSOffice layout engine models.
    • Gary Edwards
       
      Interesting concept here: "the bulk of expertise is in understanding the detail of connections between data and the processes which produced it or must consume it ... it is these expert connections which SDVs are intended to sever."
      I'm not quite sure what to make of that statement. When an SDV is standardized by ISO, the expectation is that the connections between data and processes would be fully understood, and implementations consistent across the board.
      Sadly, ODF is ISO approved, but doesn't come close to meeting these expectations. ODF interop might as well be ZERO. And the only way to fix it is to go into the presentation layer of ODF, strip out all the application-specific bindings, and fully specify the semantics of layout.
  • In short, the bulk of expertise is in understanding the detail of connections between data and the processes which produced it or must consume it. It is precisely these expert connections which standard data vocabularies are intended to sever.
Gary Edwards

Developing a Universal Markup Solution For Web Content - 0 views

  •  
    KODAXIL To Replace XML?

    File this one under the Universal Interoperability label. Very interesting, especially since XML document formats have proven to fall short on the two primary expectations of users: interoperability and Web readiness. Like HTML+ :) Maybe KODAXIL will work?

    The recent Web 2.0 Conference was filled with new web services, portals and wiki efforts trying their best to mash data into document objects. iCloud, MindTouch, AppLogic, 3Tera, Caspio and Gazoodle all deserve attention, although each took a rather different approach towards solving the problem. MindTouch in particular was excellent.

    "A Montreal-based software and research development company has developed a markup solution and language-neutral asset-descriptor that, when fully developed, could result in a universal computer language for representing information in databases, web and document contents and business objects."

    "While still at a seminal stage of development, the company, Gnoesis, aims to address the problem of data fragmentation caused by semantic differences between developers and users from different linguistic backgrounds."

    Gnoesis, the company that developed the language, called KODAXIL (Knowledge, Object, Data, Action, and eXtensible Interoperable Language), a data and information representation language, says the new language will replace the XML function of consolidating semantically identical data streams from different languages by creating a common language to do this.

    The extensible semantic markup associated with this language will be understood worldwide and is three times shorter than XML.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
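
    To make the anatomy concrete, here is the skeleton of a minimal ePub archive. The OEBPS/ file names are illustrative; the mimetype file and META-INF/container.xml are fixed by the spec (the container format even reuses an OASIS namespace):

        mimetype                    (the bare string "application/epub+zip")
        META-INF/container.xml      (points to the package file)
        OEBPS/content.opf           (metadata, manifest, reading order)
        OEBPS/chapter01.xhtml       (the book content, as XHTML)

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- META-INF/container.xml: the single fixed entry point -->
        <container version="1.0"
                   xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                      media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>

    Unzip any .epub file and this is, give or take, what falls out: a Web page in a wrapper, just as the author says.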
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
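
    A sketch of the kind of cleanup described, with the class name invented for illustration:

        <!-- As exported: a list item tagged as a styled paragraph -->
        <p class="bullet-list-item">Prefer free software over proprietary systems.</p>

        <!-- After cleanup: structural XHTML that any software can interpret -->
        <ul>
          <li>Prefer free software over proprietary systems.</li>
        </ul>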
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
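
    The article doesn't reproduce the script, but a heavily simplified sketch of one such template suggests the flavor; the ICML wrapper elements and the style name here are illustrative rather than a faithful rendering of Adobe's schema:

        <?xml version="1.0"?>
        <!-- xhtml2icml.xsl: run with, e.g., "xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml" -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">

          <!-- Map each XHTML paragraph to an ICML paragraph-style range -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>

        </xsl:stylesheet>

    Because both vocabularies are flat and well documented, the real script can stay small: roughly one template per XHTML structure, which is how 500 lines covers a whole book.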
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
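
    For example (class names invented for illustration; microformats standardize exactly this kind of convention):

        <!-- Plain XHTML, with publisher semantics layered on via class -->
        <div class="chapter">
          <p class="epigraph">An opening quotation sets the tone.</p>
          <p class="summary">This chapter surveys XML workflows for publishers.</p>
          <p>Ordinary body text begins here.</p>
        </div>

    A stylesheet or transformation script can then select p.epigraph much as a DocBook toolchain would select an <epigraph> element, without ever leaving everyday Web tooling.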
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is the browser-native dialect of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also derived from SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
  •  
    As an afterthought, I was thinking that an alternative title to this article might have been, "Working with the Web as the Center of Everything".
Gary Edwards

» Web inventor Tim Berners-Lee Unplugged: Semantic Web better than APIs for d... - 0 views

  • the general idea is for there to be a layer of data on the Internet that he calls the “data bus” and the way the data bus works is not too different from how we’ve heard Microsoft’s WinFS filesystem described where connectivity between related data items is organic rather than synthesized. For example, whereas today, a mashup developer may have to call upon two APIs to show where a specific Starbucks is on a map, the Semantic Web approach might involve little more than a simple query of that data bus using a query technology called SparQL.
  •  
    Great explanation of the Semantic Web, RDF, and SPARQL versus big-vendor Web APIs
Gary Edwards

Harmonizing ODF and OOXML using NameSpaces | Tim Bray's Thought Experiment - 0 views

  • First, what if Microsoft really is doing the right thing? Second, how can we avoid having two incompatible file formats? [Update: There’s been a lot of reaction to this piece, and I addressed some of those points here.]
  • On the technology side, the two formats are really more alike than they are different. But, there are differences: O12X’s design center, Microsoft has said repeatedly, is capturing the exact semantics of the billions of existing Microsoft Office documents. ODF’s design center is general-purpose reusability, and leveraging existing standards like SVG and MathML and so on.
    • Gary Edwards
       
      OOXML, or to put it more accurately "O12X" as Tim suggests, is designed to capture the exact semantics of MSOffice 12. In fact, OOXML is an XML encoding of the MSOffice 12 in-memory binary representation. When it comes to representing older versions of MSOffice documents, OOXML must use legacy "compatibility settings" to capture the semantics, and it's not an exacting science, to say the least. The thing is, OpenOffice ODF uses the same technique, resulting in application-specific ODF documents with over 150 undocumented, unspecified "compatibility settings". After years of requests from the OASIS ODF Technical Committee to document these application-specific settings, Sun has yet to provide any kind of response. And this kills ODF interoperability, especially concerning KOffice. There is also the issue of OASIS ODF hijacked namespaces. When ODF applications reference a namespace, the actual URL is hijacked, with http://oasis-open.org/???? replacing the proper namespace of http://W3C.org/???? This hijacking impacts the ODF reuse of important W3C technologies such as XForms, SVG, MathML, and SMIL. So where's the problem, you ask? Well, when a developer imports or tries to process an OpenOffice ODF document, they rely on, say, the W3C XForms specification for their understanding. OpenOffice however seriously constrains the implementation of XForms, SVG, MathML, RDFa and RDF/XML. This should be reflected in the new namespace. However, if you follow the hijacked URL, you'll find that there is nothing there. There is no specification describing how OpenOffice implements XForms in ODF! This breaks developer libraries, breaks ODF interoperability between ODF applications, and offends the W3C to no end. So I think it might be fair to say that at this point, neither ODF nor OOXML have come close to fulfilling their design objectives.
  • The capabilities of ODF and O12X are essentially identical for all this basic stuff. So why in the flaming hell does the world need two incompatible formats to express it? The answer, obviously, is, “it doesn’t”.
    • Gary Edwards
       
      Exactly!! Except for one thing that Tim misses: the presentation layers of both ODF and OOXML are application specific. It is also the application-specific nature of the OpenOffice ODF presentation layer that prevents interoperability with KOffice ODF! There is near zero interop between OpenOffice and KOffice, and KOffice has been a contributing member of the OASIS ODF TC for FIVE YEARS! It's the presentation layer, Tim. ODF and OOXML are application-specific formats because their presentation layers are woefully application specific and entirely reflective of each application's layout engine and feature set implementation model. I often imagine what ODF would be like if, back in 2001, Sun had chosen to implement CSS as the OpenOffice presentation layer instead of the quirky but innovative, and 100% application specific, automatic-styles presentation layer we now see in ODF. Unlike ODF's "automatic-styles", CSS is a totally application-independent presentation model prized exactly for its universal interoperability!
  • ...1 more annotation...
  • The ideal outcome would be a common shared office-XML dialect for the basics—and it should be ODF (or a subset), since that’s been designed and debugged—then another extended vocabulary to support Microsoft features, whether they’re cool new whizzy features or mouldy old legacy features (XML Namespaces are designed to support exactly this kind of thing). That way, if you stayed with the basic stuff you’d never need to worry about software lock-in; the difference between portable and proprietary would be crystal-clear. And, for the basic stuff that everybody uses, there’d be only one set of tags. This outcome is technically feasible. Who could possibly be against it?
    • Gary Edwards
       
      Bingo! ODF and OOXML should strip off the application-specific complexities and seek a neutral, generic XML representation of basic document structures common to ALL documents. Then, use the XML Namespace mechanism to extend (with proper descriptions) the generic core to include the volumes of application-specific features that now fill each format. One thing I disagree with Tim about: the interop of ODF and OOXML is hopelessly broken. The OpenDocument Foundation tried for over a year to close the compatibility gap between ODF and MSOffice binary - xml documents. The OASIS ODF TC would have none of it. IBM and Sun are set on a harsh course of highly disruptive and costly rip-out-and-replace of MSOffice based on government mandates for ODF. There is no offer of compromise to be had. On the Microsoft side, even if they did want to compromise (a big IF), there is that problem of over 550 million MSOffice workgroup-workflow desktops to contend with. The thing is, the only way to harmonize, merge, convert or translate between two application-specific formats is to actually harmonize the applications themselves. While the generic subset is a worthy goal, the process would be fraught with real-world concerns that the existing application workflows not be disrupted. My proposal? Demand that ODF and OOXML application vendors provide format options for PDF, and the W3C's family of formats: (X)HTML5, (X)HTML - CSS, and CDF (XHTML-CSS-XForms-SVG-SMIL-MathML). That will do it. We might never see the quality of interoperability we had hoped for in a desktop application-to-application scenario. But we can and should fully expect high quality interop at the higher level of the Web. You can convert an application-specific format to a generic like CDF. By setting up conversion channels to the same CDF profile within MSOffice, OpenOffice, KOffice, Symphony, and Google Docs, we can achieve the universal interoperabil
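
    A hypothetical sketch of the mixed-vocabulary document Tim's proposal implies. Only the XML Namespaces mechanism itself is standard here; the ext: namespace URI and the extension element are invented for illustration, while the office: and text: URIs are ODF's real ones:

        <?xml version="1.0"?>
        <office:document
            xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
            xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
            xmlns:ext="http://example.com/office-extensions/1.0">
          <!-- The shared, portable core: plain ODF text -->
          <text:p>Quarterly results were strong.</text:p>
          <!-- A vendor feature layered on through its own namespace -->
          <ext:legacy-compat setting="footnoteLayoutLikeWW8"/>
        </office:document>

    A consumer that understands only the core vocabulary can safely ignore the ext: elements; one that understands the extension gets the extra fidelity. That is the crystal-clear portable/proprietary line the quoted passage describes.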
Gary Edwards

What is RDF and what is it good for? - 0 views

  • On the Semantic Web, computers do the browsing for us. The SemWeb enables computers to seek out knowledge distributed throughout the Web, mesh it, and then take action based on it. To use an analogy, the current Web is a decentralized platform for distributed presentations while the SemWeb is a decentralized platform for distributed knowledge. RDF is the W3C standard for encoding knowledge.
  •  
    Excellent introduction to RDF and the Semantic Web!
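
    To ground the analogy, here is a minimal sketch of how RDF encodes one piece of knowledge, roughly the Starbucks-on-a-map fact from the Berners-Lee piece bookmarked above. The resource URI is hypothetical; dc and geo are real Dublin Core and W3C vocabularies:

        <?xml version="1.0"?>
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns:dc="http://purl.org/dc/elements/1.1/"
                 xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
          <rdf:Description rdf:about="http://example.org/cafes/starbucks-0042">
            <dc:title>Starbucks, 5th and Main</dc:title>
            <geo:lat>49.2827</geo:lat>
            <geo:long>-123.1207</geo:long>
          </rdf:Description>
        </rdf:RDF>

    Any SPARQL-capable agent could then query a store of such statements for coffee shops near a point, with no site-specific API involved, which is the "data bus" idea in the quote.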
Gary Edwards

Brian Jones: Open XML Formats : Mapping documents in the binary format (.doc; .xls; .pp... - 0 views

  • The second issue we had feedback on was an interest in the mapping from the binary formats into the Open XML formats. The thought here was that the most effective way to help people with this was to create an open source translation project to allow binary documents (.doc; .xls; .ppt) to be translated into Open XML. So we proposed the creation of a new open source project that would map a document written using the legacy binary formats to the Open XML formats. TC45 liked this suggestion, and here was the TC45 response to the national body comments: We believe that Interoperability between applications conforming to DIS 29500 is established at the Office Open XML-to-Office Open XML file construct level only.
    • Gary Edwards
       
      And here I was betting that the blueprints to the secret binaries would be released the weekend before the September 2nd, 2007 ISO vote on OOXML! Looks like Microsoft saved the move for when they really had to use it: just weeks before the February ISO Ballot Resolution Meetings set to resolve the Sept 2nd issues. The truth is that years of reverse engineering have depleted the value of keeping the binary blueprints secret. It's true that interoperability with MSOffice in the past was near entirely dependent on understanding the secret binaries. Today however, with the rapid emergence of the Exchange/SharePoint juggernaut, interop with MSOffice is no longer the core issue. Now we have to compete with E/S, and it is the E/S interfaces, protocols and document APIs and dependencies that must be reverse engineered. The E/S juggernaut is now surging to 70% or more of the market. These near-monopoly levels of market penetration are game changing. One must reverse engineer or license the .NET libraries to crack the interop problem. And this time it's not just MSOffice. Today one must crack into the MS Stack, whose core is that of MSOffice <> E/S. So why not release the secret binary blueprints? If that's the cost of getting the application, platform and vendor specific OOXML through ISO, then it's a small price to pay for your own international standard.
  •  
    Well well well. We knew that IBM had access to the secret binary blueprints back in 2006. Now we know that Sun ALSO had access!
    And why is this important? In June of 2006, Massachusetts CIO Louis Gutierrez asked the OpenDocument Foundation's da Vinci Group to work with IBM on developing the da Vinci ODF plug-in clone of Microsoft's OOXML Compatibility Pack plug-in. When we met with IBM they were insistent that the only way OASIS ODF could establish sufficient compatibility with MSOffice and the billions of binary documents would be to have the secret blueprints open.
    Even after we explained to IBM that da Vinci uses the same internal conversion process that the OOXML plug-in used to convert binaries, IBM continued to insist that opening up the secret binaries was a primary objective of the OASIS ODF community.
    For sure this was important to IBM and Sun, but the secret binaries were of no use to us. da Vinci didn't need them. What da Vinci needed instead was a subset of ODF designed for the conversion of those billions of binary documents! A need opposed by Sun.
    Sun of course would spend the next year developing their own ODF plug-in for MSOffice. But here's the thing: it turns out that Sun had complete access to the secret binary blueprints dating back to 2006!!!!!!
    So even though IBM and Sun have had access to the blueprints since 2006, they have been unable to provide effective conversions to ODF!
    This validates a point the da Vinci group has been trying to make since June of 2006: the problem of perfecting a high fidelity conversion between the billions of binaries and ODF has nothing to do with access to the secret binary blueprints. The real issue is that ODF was NOT designed for the conversion of those binary documents.
    It is true that one could eXtend ODF to achieve the needed compatibility. But one has to be very careful before taking this ro
Gary Edwards

Independent study advises IT planners to go OOXML - 0 views

  • From: Bill Gates Sent: Saturday, December 5 1998 To: Bob Muglia, Jon DeVann, Steven Sinofsky Subject : Office rendering "One thing we have got to change in our strategy - allowing Office documents to be rendered very well by other peoples browsers is one of the most destructive things we could do to the company. We have to stop putting any effort into this and make sure that Office documents very well depends on PROPRIETARY IE capabilities. Anything else is suicide for our platform. This is a case where Office has to avoid doing something to destroy Windows. I would be glad to explain at a greater length. Likewise this love of DAV in Office/Exchange is a huge problem. I would also like to make sure people understand this as well." Tuesday, August 28, 2007
  • 3.2.2.2. A pox on both your houses! gary.edwards - 01/22/08 Hi Robert, What you've posted are examples of MSOffice "compatibility settings" used to establish backwards compatibility with older documents, and for the conversion of alien file formats (such as various versions of WordPerfect .wpd). These compatibility settings are unspecified in that we know the syntax but have no idea of the semantics. And without the semantic description there is no way other developers can understand the implementation. This of course guarantees an unacceptable breakdown of interoperability. But I would be hesitant to make my stand of rejecting OOXML based on this issue. It turns out that there are upwards of 150 unspecified compatibility settings used by OpenOffice/StarOffice. These settings are not specified in ODF, but will nevertheless show up in OpenOffice ODF documents – similarly defying interoperability efforts! Since the compatibility settings are not specified or even mentioned in the ODF 1.0 – ISO 26300 specification, we have to go to the OOo source code to discover where this stuff comes from. Check out lines 169-211. Here you will find interesting settings such as, “UseFormerLineSpacing, UseFormerObjectPositioning, and UseFormerTextWrapping”. So what's going on here?
    • Gary Edwards
       
      ..... response to Robert Crocker concerning Mary Jo's article, "Independent study advises IT planners to go OOXML". 3.2.2. So this is well documented? Robert Crocker - 01/14/08 : Mind explaining these functions to us then? - Section 2.15.3.6 page 2161, autoSpaceLikeWord95. - Section 2.15.3.26 page 2199, footnoteLayoutLikeWW8. - Section 2.15.3.31 page 2209, lineWrapLikeWord6. - Section 2.15.3.32 page 2210, mwSmallCaps. - Section 2.15.3.41 page 2225, shapeLayoutLikeWW8. - Section 2.15.3.51 page 2245, suppressTopSpacingWP
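
    For illustration, here is roughly what such settings look like on disk. The setting names are the ones cited in these comments; the surrounding wrapper elements are sketched from the two formats and may not match either spec exactly:

        <!-- OOXML: word/settings.xml -->
        <w:compat>
          <w:autoSpaceLikeWord95/>
          <w:footnoteLayoutLikeWW8/>
          <w:lineWrapLikeWord6/>
        </w:compat>

        <!-- OpenOffice ODF: settings.xml -->
        <config:config-item-set config:name="ooo:configuration-settings">
          <config:config-item config:name="UseFormerLineSpacing"
                              config:type="boolean">false</config:config-item>
          <config:config-item config:name="UseFormerTextWrapping"
                              config:type="boolean">false</config:config-item>
        </config:config-item-set>

    In both cases the syntax is perfectly readable; what is missing is any normative description of how a second implementation should change its line spacing or text wrapping when it sees the flag, which is exactly the interoperability complaint being made here.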
Gary Edwards

Microsoft Watch Finally Gets it - It's the Business Applications!- Obla De OBA Da - 0 views

  • To be fair, Microsoft seeks to solve real world problems with respect to helping customers glean more value from their information. But the approach depends on enterprises adopting an end-to-end Microsoft stack—vertically from desktop to server and horizontally across desktop and server products. The development glue is .NET Framework, while the informational glue is OOXML.
    • Gary Edwards
       
      OOXML is the transport: a portable XML document model where the "document" is the interface into content, data, and media streaming.

      The binding model for OOXML is "Smart Documents", and it is proprietary!

      Smart Documents is how data, streaming media, scripting-routing-workflow intelligence and metadata is added to any document object.

      Think of the ODF binding model using XForms, XML/RDF and RDFa metadata. One could even use Jabber XMPP as a binding model, which is how we did the Comcast SOA-based Sales and Inventory Management System prototype.

      Interestingly, Smart Documents is based on pre-written widgets that can simply be dragged, dropped and bound to any document object. The InfoPath application provides a highly visual means for end users to build intelligent self-routing forms. But Visual Studio .NET, which was released with MSOffice 2007 in December of 2006, makes it very easy for application and line-of-business integration developers to implement very advanced data binding using the Smart Document widgets.

      I would also go as far as to say that what separates MSOOXML from Ecma 376 is going to be primarily Smart Documents.

      Yes, there are .NET Framework libraries and Vista Stack dependencies like XAML that will also provide a proprietary, Vista-Stack-only barrier to interoperability, but Smart Documents is a killer.

      One company that will be particularly hurt by Smart Documents is Google. The reason is that the business value of Google Search is based on using advanced and closely held proprietary algorithms to provide metadata structure for unstructured documents.

      This was great for a world awash in unstructured documents. By moving the "XML" structuring of documents down to the author - workgroup - workflow application level though, the world will soon enough be awash in highly structured documents that have end user metadata defining document objects and
  • Microsoft seeks to create sales pull along the vertical stack between the desktop and server.
    • Gary Edwards
       
      The vertical stack is actually desktop - server - device - web based.  The idea of a portable XML document is that it must be able to transition across the converged application space of this sweeping stack model.

      Note that ODF is intentionally limited to the desktop by its OASIS Charter statement.  One of the primary failings of ODF is that it is not able to be fully implemented in this converged space.  OOXML on the other hand was created exactly for this purpose!

      So ODF is limited to the desktop, and remains tightly bound to OpenOffice feature sets.  OOXML differs in that it is tightly bound to the Vista Stack.

      So where is an Open Stack model to turn to?

      Good question, and one that will come to haunt us for years to come.  Because ODF cannot move into the converged space of desktop-to-server-to-device-to-web information systems connected through a portable document/data transport, it is unfit as a candidate for Universal File Format (UFF).

      OOXML is unfit as a UFF because it is application, platform and vendor bound.

      For those of us who believe in an open and unencumbered universal file format, it's back to the drawing board.

      XHTML+ (XHTML + CSS3 + RDF) is looking very good.  The challenge is proving that we can build plugins for MSOffice and OpenOffice that can fully implement XHTML+.  Can we convert the billions of binary legacy documents and existing MSOffice-bound business processes to XHTML+?

      I think so.  But we can't be sure until da Vinci proves this conclusively.

      One thing to keep in mind though.  The internal plugins have already shown that it is possible to do multiple file formats.  OOXML, ODF, and XML-encoded RTF have all been shown to work, and do so with the level of two-way conversion fidelity demanded by existing business processes.

      So why not try it with XHTML+, or ODEF (the eXtended version of ODF en
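
    The XHTML+ idea (XHTML + CSS3 + RDF) can be sketched with RDFa, which layers RDF metadata onto ordinary XHTML attributes. The dc prefix is the real Dublin Core vocabulary; the document URI and content are invented:

        <div xmlns:dc="http://purl.org/dc/elements/1.1/"
             about="http://example.org/docs/quarterly-report">
          <h1 property="dc:title">Quarterly Report</h1>
          <p property="dc:creator">Gary Edwards</p>
          <p>Body text, styled by CSS, carrying its own machine-readable metadata.</p>
        </div>

    Styling stays in CSS, structure stays in XHTML, and the semantics ride along as RDF triples that any agent can harvest.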
  • Microsoft's major XML-based format development priority was backward compatibility with its proprietary Office binary file formats.
    • Gary Edwards
       
      This backwards compatibility with the existing binary file formats isn't the big deal Microsoft makes it out to be.  ODF 1.0 includes a "Conformance Clause" (Section 1.5) that was designed and included in the specification exactly so that the billions of binary legacy documents could be converted into ODF XML.

      The problem with the ODF Conformance Clause is that the leading ODF application, OpenOffice,  does not fully support and implement the Conformance Clause. 

      The only foreign elements supported by OpenOffice are paragraphs and text spans.  Critically important structural document characteristics such as lists, fields, tables, sections and page breaks are not supported!

      This leads to a serious drop in conversion fidelity wherever MS binaries are converted to OpenOffice ODF.

      Note that OpenOffice ODF is very different from MSOffice ODF as implemented by internal conversion plugins like da Vinci.  KOffice ODF and Google Docs ODF are all different ODF implementations.  Because there are so many different ways to implement ODF, and still have "conforming" ODF documents, there is much truth to the statement that ODF has zero interoperability.

      It's also true that OOXML has optional implementation areas.  With ODF we call these "optional" implementation areas "interoperability break points" because this is exactly where document exchange presentation fidelity breaks down, leaving the dominant-market ODF application as the only means of sustaining interoperability.

      With OOXML, the entire Vista Stack - Win32 dependency layer is "optional".  No doubt, all MSOffice - Exchange/SharePoint Hub applications will implement the full sweep of proprietary dependencies.  This includes the legacy Win32 API dependencies (like VML, EMF, EMF+), and the emerging Vista Stack dependencies that include Smart Documents, XAML, .NET 3.0 Libraries, and DrawingML.
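
      As a concrete illustration of the legacy layer, here is roughly what a VML-encoded shape looks like inside a WordprocessingML body (the id and style values are invented for the sketch). Any consumer outside the Win32 world has to re-implement VML just to render this one box:

        <w:pict xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main"
                xmlns:v="urn:schemas-microsoft-com:vml">
          <!-- VML: the pre-DrawingML, IE/Win32-era vector markup -->
          <v:shape id="_x0000_s1026" style="width:100pt;height:50pt" fillcolor="#cde">
            <v:textbox>Legacy drawing content</v:textbox>
          </v:shape>
        </w:pict>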

      MSOffice 2007 i
  • ...6 more annotations...
  • Microsoft's backwards compatibility priority means the company made XML-based format decisions that compromise the open objectives of XML. Open Office XML is neither open nor XML.
    • Gary Edwards
       
      True, but a tricky statement given that the proprietary OOXML implementation is "optional".  It is theoretically possible to implement Ecma 376 without the proprietary dependencies of MSOffice - Exchange/SharePoint Hub - Vista Stack "OOXML".

      In fact, this was first demonstrated by the legendary document processing - plugin architecture expert, Florian Reuter.

      Florian has the unique distinction of being the primary architect for two major plugins: the da Vinci ODF plugin for MSOffice, and the Novell OOXML Translator plugin for OpenOffice!

      It is the Novell OOXML Translator Plugin for OpenOffice that first demonstrated that Ecma 376 could be cleanly implemented without the MSOffice application-platform-vendor specific dependencies we find in every MSOffice OOXML document.

      So while Joe is technically correct here, that OOXML is neither open nor XML, there is a caveat.  For 95% of all desktops, and near 100% of all desktops in a workgroup, Joe's statement holds true.  For all practical concerns, that's enough.  Microsoft's vaunted marketing spin machine, though, will make it sound as though OOXML is actually open and application-, platform- and vendor-independent.


  • Microsoft got there first to protect Office.
    • Gary Edwards
       
      No. I disagree. Microsoft needs to move to XML structured documents regardless of what others are doing. The binary document model is simply of no use for desktop- to server- to device- to web transport!

      Many wonder what Microsoft's SOA strategy is. Well, it's this: the Vista Stack based on OOXML-Smart Documents-.NET.

      The thing is, Microsoft could not afford to market a SOA solution until all the proprietary solutions of the Vista Stack were in place.

      The Vista Stack looks like this:

      ..... The core :: MSOffice <> OOXML <> IE <> The Exchange/SharePoint Hub

      ..... The services :: E/S Hub <> MS SQL Server <> MS Dynamics <> MS Live <> MS Active Directory Server <> MSOffice RC Front End

      The key to the stack is the OOXML-Smart Documents capture of EXISTING MSOffice-bound business processes and documents.

      The trick for Microsoft is to migrate these existing business processes and documents to the E/S Hub, where line-of-business developers can re-engineer aging desktop LOB apps.

      The productivity gains that can be had through this migration to the E/S Hub are extraordinary.

      A little over a year ago an E/S Hub vertical market application called "Agent Achieve" came out for the real estate industry. AA competed against twenty years of contact-management-based, MLS-data-connected desktop shrinkware applications. (MLS: Multiple Listing Service)

      These traditional desktop client/server productivity apps defined the real estate business process, as far as it could be said to be "digital".  For the most part, the real estate transaction industry remains a paper-driven process. The desktop stuff was only useful for managing clients and lead prospecting. No one could crack the electronic documents - electronic business transaction model.  This will no doubt change with the emer
  • Microsoft can offer businesses many of the informational sharing and mining benefits associated with the markup language while leveraging Office and supporting desktop and server products as the primary consumption conduit.
    • Gary Edwards
       
      Okay, now Joe has the Microsoft SOA bull by the horns.  Why doesn't he wrestle the monster down?
  • By adapting XML
    • Gary Edwards
       
      The requirements of these E/S Hub systems are Windows XP, MSOffice 2003 Professional, Exchange Server with OWA (Outlook Web Access), SharePoint Server, Active Directory Server, and at least four MS SQL Servers!

      In April of 2006, Microsoft issued a harsh and sudden End-of-Life for all Windows 2000 - MSOffice 2000 systems in the real estate industry (although many industries were similarly impacted). What happened is that on a Friday afternoon, just prior to a big open house weekend, Microsoft issued a security patch for all Exchange systems. Once the patch was installed, end users needed IE 7.0 to connect to the Exchange Server systems.

      Since there is no IE 7.0 made for Windows 2000, those users relying on E/S Hub applications, which was the entire industry, suddenly found themselves disconnected and near out of business.

      Amazingly, not a single user complained! Rather than getting pissed at Microsoft for the sudden and very disruptive EOL, the real estate users simply ran out to buy new XP-MSOffice 2003 systems. It was all done under the rationale that to be competitive, you have to keep up with technology systems.

      Amazing. But it also goes to show how powerfully productive the E/S Hub applications can be. This wouldn't have happened if the E/S Hub applications didn't have a very high productivity value.

      When we visited Massachusetts in June of 2006, to demonstrate and test the da Vinci ODF plugin for MSOffice, we found them purchasing E/S Hubs en masse! These are ODF killers! Yet Microsoft sales people had convinced Massachusetts ITD that Exchange/SharePoint was a simple-to-use eMail-calendar-portal system. Not a threat to anyone!

      The truth is that in the E/S Hub ecosystem, OOXML is THE TRANSPORT. ODF is a poor, second-class attachment, of no use at the application / document processing chain level.

      Even if Massachusetts had mandated ODF, they were only one E/S Hub Court Doc
  • Microsoft will vie for the whole business software stack, a strategy that I believe will be indisputable by early 2009 at the latest.
    • Gary Edwards
       
      Finally, someone who understands the grand strategy of leveraging the desktop monopoly into the converged space of server, device and web information systems.

      What Joe isn't watching is the way the Exchange/SharePoint Server connects to MS SQL Server, Active Directory Server, MS Live and MS Dynamics.

      Also, Joe does not see the connection between OOXML as the portable XML document/data transport, and the insidiously proprietary Smart Documents metadata - data binding system that totally separates MSOOXML from Ecma 376 OOXML!
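
      Ecma 376 does expose one visible piece of this machinery: custom XML data binding in WordprocessingML. The sketch below (invented namespace, XPath and GUID) binds a content control to a node in a custom XML part; the Smart Documents layer that drives such bindings across the Vista Stack is the part that stays proprietary:

        <w:sdt xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
          <w:sdtPr>
            <!-- bind this content control to a node in a custom XML part -->
            <w:dataBinding w:prefixMappings="xmlns:inv='http://example.com/invoice'"
                           w:xpath="/inv:invoice/inv:total"
                           w:storeItemID="{A1B2C3D4-0000-0000-0000-000000000001}"/>
          </w:sdtPr>
          <w:sdtContent>
            <w:p><w:r><w:t>1,234.00</w:t></w:r></w:p>
          </w:sdtContent>
        </w:sdt>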
  • I'm convinced that Office as a platform is an eventual dead end. But Microsoft is going to lead lots of customers and partners down that platform path.
    • Gary Edwards
       
      Yes, but the new platform for business process development is that of the MSOffice <> Exchange/SharePoint Hub.

      The OOXML-Smart Docs transport replaces the old binary document with its OLE, VBA script and macro functionality, which, for the sake of brevity, we can call the legacy Win32 API dependencies.

      One substantial difference is that OOXML-Smart Docs is Vista Stack ready, while the Win32 API dependencies were desktop bound.

      Another way of looking at this is to see that the old MSOffice platform was great for desktop application integration.  As long as the complete Win32 API was available (Windows + MSOffice + VBA run-times), this platform was great for workgroups.  The Line of Business integrated apps were among the most brittle of all client/server efforts, but they were the best for that generation.

      The Internet offers everyone a new way of integrating data, content and streaming media.  Web applications are capable of loosely coupled serving and consuming of other application services.  Back-end systems can serve up data in a number of ways: web services as SOAP, web services as AJAX/REST, or XML data streams as in the XMLHttpRequest or Jabber P2P models.
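
      A rough sketch of the difference, using a made-up MLS listing service (the example.com namespaces and element names are invented). The SOAP route wraps the exchange in an envelope; the REST route just fetches the bare XML resource, which is what an XMLHttpRequest consumer would parse:

        <!-- SOAP style: the request is itself an XML envelope POSTed to an endpoint -->
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <GetListing xmlns="http://example.com/mls">
              <ListingId>MLS-40213</ListingId>
            </GetListing>
          </soap:Body>
        </soap:Envelope>

        <!-- REST style: GET http://example.com/mls/listings/MLS-40213
             returns the bare resource, no envelope -->
        <listing id="MLS-40213" xmlns="http://example.com/mls">
          <address>123 Main St</address>
          <price currency="USD">450000</price>
        </listing>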

      On the web services consumption side, it looks like AJAX/REST will be the blockbuster choice, if the governance and security issues can be managed.

      Into this SOA mash Microsoft will push with a sweeping integrated stack model.  Since the Smart Docs part of the OOXML-Smart Docs transport equation is totally proprietary, but used throughout the Vista Stack, it will provide Microsoft with an effective customer lock-in / OSS lock-out point.

Gary Edwards

Independent study advises IT planners to go OOXML | All about Microsoft | ZDNet.com - 0 views

  • “ODF represents laudable design and standards work. It’s a clean and useful design, but it’s appropriate mostly for relatively unusual scenarios in which full Microsoft Office file format fidelity isn’t a requirement. Overall, ODF addresses only a subset of what most organizations do with productivity applications today.” The report continues: “ODF is insufficient for complex real-world enterprise requirements, and it is indirectly controlled by Sun Microsystems, despite also being an ISO standard. It’s possible that IBM, Novell, and other vendors may be able to put ODF on a more customer-oriented trajectory in the future and more completely integrate it with the W3C content model, but for now ODF should be seen as more of an anti-Microsoft political statement than an objective technology selection.”
    • Gary Edwards
       
      Mary Jo takes on the recently released Burton Group report comparing OOXML and ODF. Peter O'Kelly, one of the Burton Group authors, once famously said, "ODF is a great format if you live in an alternative universe where MSOffice doesn't exist!"

      This observation speaks to the core problem facing ODF and those who seek to implement the ODF standard: ODF was not designed for the conversion of MSOffice documents. Nor was ODF designed to work with MSOffice applications. Another way of saying this is that ODF was not designed to be interoperable with MSOffice documents, applications and bound processes. The truth is that ODF was designed for OpenOffice/StarOffice. It is an application-specific format.

      Both OOXML and ODF do a good job of separating content from presentation (style). The problem is that the presentation - layout layer of each remains bound to the specific application producing it. While the content layers are entirely portable and can be exchanged without information loss, the presentation layers cannot.

      Microsoft makes no bones about the application-specific design and purpose of OOXML. It's stated right in the Ecma 376 charter that OOXML was designed to be compatible with MSOffice and the billions of binary documents in MSOffice-specific binary formats. The situation, however, is much more confusing with ODF. ODF is often promoted as being application, platform and vendor independent. After five years of development, though, the OASIS ODF TC has been unable to strip ODF of its OpenOffice/StarOffice-specific aspects.

      ODF 1.0 - ISO 26300 had three areas that were underspecified, meaning these areas were described in syntax only and lacked the full semantics demanded by interoperable implementations. Only OpenOffice and StarOffice code base applications are able to exchange documents with acceptable fidelity. The three underspecified areas of ODF are: Lists (numbered), F
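
      A small sketch shows why numbered lists are such a problem area. In ODF the list markup itself carries no numbering; everything visible hangs off a named list style (here "L1", an invented name) defined elsewhere in the document, and the spec describes the syntax of that style without pinning down renumbering and continuation behavior:

        <text:list xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
                   text:style-name="L1">
          <text:list-item>
            <text:p>First item</text:p>
          </text:list-item>
          <text:list-item>
            <text:p>Second item</text:p>
          </text:list-item>
        </text:list>
        <!-- whether this renders as "1. 2." or "a) b)" depends on the
             text:list-style named "L1" and on application behavior -->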
Gary Edwards

ConsortiumInfo.org - ODF vs. OOXML: War of the Words Chapter 5 - 0 views

  • Unlike screw threads, which are easily implemented with complete fidelity, it is sometimes only feasible to create a standard for software that, in a given case, at best will enable two products to become close to interoperable. After that, tinkering and testing is necessary to accomplish the final "fit." Similarly, the costs to innovation in achieving true "plug and play" interoperability when that result is feasible may be unacceptably high, leading to a decision to create a standard that (like ODF) only locks in a very significant amount of functionality, rather than complete uniformity (as OOXML strives to achieve).
    • Gary Edwards
       
      This is an odd way of stating the interop problem between ODF and the billions of legacy MSOffice documents: "the costs to innovation in achieving true plug and play interoperability (high fidelity conversion?) when that result is feasible may be unacceptably high......"
      OOXML was designed for the high-fidelity conversion of those billions of legacy MSOffice documents. ODF was not.
      What's interesting here is that Andy is correctly pointing out that the ODF vendors refuse to compromise on the innovative ways OpenOffice differs from MSOffice. The innovations involve the different ways OpenOffice implements basic document structures such as lists, sections, fields, tables and page dynamics. MSOffice uses an older method of implementation.
      When converting legacy MSOffice documents to ODF, the fidelity breaks down wherever these structural features are present. The key point here is that these structural differentials are exactly related to how OpenOffice and MSOffice differ in their implementation methods. It's an application difference being expressed at the file format level!
      The ODF vendors refuse to compromise on their application-level innovations. The result is that billions of MSOffice documents cannot be converted to ODF without significant loss of information.
      Which is to say: both ODF and OOXML are application-specific formats. Worse, neither ODF nor OOXML specifies the semantics of layout; they specify only the syntax. Developers must study OpenOffice and MSOffice to figure out how presentation (layout) is achieved.
      This stands in stark contrast to the W3C's Compound Document Format (CDF). CDF provides a very generic, application independent separation of content (XHTML) and presentation (CSS), where the presentation layer is entirely specified. CSS is highly portable because it is completely specified and totally application indepen
  •  
    The First Law of the Internet is that of interoperability. Interop ALWAYS comes first.
    Interop trumps innovation!!!
    This is why the Internet changes everything. Innovation takes place within the bounds of interoperability. Vendors of course rely on innovation as the primary means of market differentiation, and they will of course champion innovative features. Interop, on the other hand, is a leveling force.
Gary Edwards

The Harmonization Myth: ISO Approval of Open XML Will Hurt Interoperability - 0 views

  • This myth is rather silly if you think about it. Here is why… When people talk about interoperability and Open XML they do so primarily in the context of ODF. The story goes something like this:
    1. Open XML is not interoperable with ODF
    2. Open XML should be interoperable with ODF because ODF is already an ISO standard!
    3. Hence: Open XML is no good, because it is not interoperable with ODF and therefore Open XML should not be an ISO standard!!!
    • Gary Edwards
       
      Forget ISO approval of OOXML. I would rather see ISO enforce the current directive that ODF be brought into compliance with existing ISO interoperability requirements. Then and only then should ISO consider OOXML.
      The reason for this approach? If ODF were compliant with existing ISO interop requirements, there would probably be some hope of harmonizing ODF and OOXML. Until ODF is stripped of its application-specific settings and fully documented, we can hardly begin the process of figuring out harmonization.
      ODF 1.0 has four gaping holes that must be tended to before ISO proceeds any further with either ODF or OOXML. The first three are that ODF numbered lists, formulas and the presentation layer (styles) are woefully underspecified. The fourth problem is that ODF is seriously lacking an interoperability framework.
      These ODF problems can of course be traced back to the fact that ODF is application-specific and bound to the "semantics and capabilities" of OpenOffice. That creates all kinds of problems. OOXML, on the other hand, is even worse: OOXML is application, platform and vendor specific! If ODF were brought up to snuff, we could reasonably start work on harmonization, thereby eliminating the need to standardize two file formats for the same purposes. Until ODF is fixed, what's the world to do?
      ~ge~
Gary Edwards

Brian Jones: Open XML Formats : Open XML support in older versions of Office - 0 views

  • The big thing I'm waiting for is other applications like OpenOffice to support custom defined schemas. This would mean that rather than simply sharing wordprocessing or spreadsheet information, we can share actual customer information. For example you could take health care data, or invoice data, or RFP data from one of the applications and move it over to the other without losing that semantic meaning. It would be like that demo many of you have seen me do where I take data from a table in Excel and move it over to Word where it's formatted more like a catalog.
Gary Edwards

Thinking XML: Schema annotation for bottom-up semantic transparency - 0 views

  •  
    Schematron, Data Dictionaries, Schema Abstracts: Uche makes the case for WordNet-style unique definitions that could be very useful to vertical industry schemas and their defined "shared" business processes. Also pertains to vertical implementations of ODF. P
Paul Merrell

untitled - 0 views

  • Most (quality) specifications provide clear instructions using those magic words SHALL, SHALL NOT, and MAY where those words have a defined meaning for an implementor. Paragraphs are clearly identified as either normative or informative. That way an implementor knows what they must and may implement to claim conformance against a specification. This approach has been well established over time as a sensible way for spec writers and implementors to work That is the way quality specifications are written. For example, ISO/IEC's JTC 1 Directives (link to PDF) requires that international standards designed for interoperability "specify clearly and unambiguously the conformity requirements that are essential to achieve the interoperability." With that clarity, conformance is testable and can provide confidence of interoperability. A suite of tests may be developed and applied to an implementation to determine which tests pass, which fail, and hence arrive at an objective pronouncement on conformance of an implementation against the entirety of the specification.
  • In a quality specification, it should be feasible to select a normative paragraph, identify a conformance test for it, and make a clear statement that this test proves that an implementation meets (or fails to meet) that requirement. Call it a test plan: define the tests (test specification), define the expected set of results, and define what constitutes a "pass" of each test that establishes conformance. The plan then provides the matrix of test spec against requirement. Simple.
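
    As a sketch of what such a test could look like, here is a minimal ISO Schematron rule. The normative statement it checks ("a list item SHALL contain at least one paragraph") is invented for illustration, not an actual ODF requirement:

      <schema xmlns="http://purl.oclc.org/dsdl/schematron">
        <ns prefix="text" uri="urn:oasis:names:tc:opendocument:xmlns:text:1.0"/>
        <pattern>
          <rule context="text:list-item">
            <!-- one SHALL, one machine-checkable assertion -->
            <assert test="count(text:p) &gt;= 1">
              A text:list-item must contain at least one text:p.
            </assert>
          </rule>
        </pattern>
      </schema>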
  • ...4 more annotations...
  • Rob Weir of IBM chaired (apology for the misuse of that last word) the formation list and then simply announced what the charter would be rather than seeking consensus among the list participants. As part of this process before that charter was produced and while I still naively believed that consensus was a goal, I sat down with ODF 1.1 and did a paragraph-by-paragraph review for testability. The numbers were quite revealing. I completely reviewed only the first four major sections and found very few clear requirements. The majority were mere statements with no normative language used to identify what was required or optional. Implementors would have to make their own interpretation.
  • It's ironic that the chair viewed as good news the fact that there were far fewer testable paragraphs than he had predicted. But his prediction of 10,000 test cases is probably far closer to how many testable paragraphs there should be; my counts were actually bad news.
  • All of the above leads to the interesting question of just how the chair expects to accomplish much that is useful in regard to ODF conformance testing before the specification is amended to tighten up the language and add clear requirements. The syntax conformity is already handled by validation against the schema, but the semantics are woefully under-specified.
  • Summary: ODF 1.1 isn't verifiable as a specification. From a fairly cursory review of the latest draft, ODF 1.2 will follow the same path. With OASIS now being more demanding regarding conformance requirements on every specification and with ISO/IEC taking a closer interest in liaison with the ODF TC, I find it hard to see how the ODF TC co-chairs can maintain this view toward verification.
Alex Brown

Doug Mahugh : Tracked Changes - 0 views

  • Much was made during the IS29500 standards process of the difference in the size of the ODF and Open XML specifications. This is a good example of where that difference comes from: in this case, a concept glossed over in three vague sentences of the ODF spec gets 17 pages of documentation in the Open XML spec.
    • Alex Brown
       
      This is the nub; OOXML may be overweight, but ODF is severely undernourished as a spec.
  •  
    Alex, I know from your previous writings that you do not regard OOXML as completely specified. But your post might be so misinterpreted. In my view, neither ODF nor OOXML has yet reached the threshold of eligibility as an international standard, completely specifying "clearly and unambiguously the conformity requirements that are essential to achieve the interoperability." ISO/IEC JTC 1 Directives, Annex I. OOXML is ahead of ODF in some aspects of specificity, but the eligibility finish line remains beyond the horizon for both.
  • ...2 more comments...
  •  
    Paul, that's right - though so far the faulty things in OOXML turn out to be more round the edges as opposed to ODF's central lapses. Still, it's early days in the examination of OOXML so I'm reserving making any firm call on the comparative merits of the specs until I have read a lot (a lot) more. Is there an area of OOXML you'd say was particularly underbaked? I'm quite interested in the fact that neither of these beasts specify scripting languages ...
  •  
    Hi, Alex,

    Most seriously, there are no profiles and accompanying requirements to enable less featureful apps to round-trip documents with more featureful apps, a la the W3C Compound Document by Reference Framework. That's an enormous barrier to market entry and interoperability. That defect reacts synergistically with the dearth of semantic conformity requirements, with the incredible number of options including those 500+ identified extension points, and with a compatibility framework for extensions that, while a good start, leaves implementers far too much discretion in assigning and processing compatibility attributes.

    There are also major harmonization issues with other standards that get in the way of transformations, where Microsoft originally rolled its own rather than embracing existing open standards.

    I think it not insignificant that OOXML as a whole is available only under a RAND-Z pledge rather than being available for the entire world. The patent claims need to be identified and worked around, or a different rights scheme needs Microsoft management's promulgation. This is a legal interoperability issue as opposed to technical, but an interoperability barrier nonetheless, an "unnecessary obstacle to international trade" in the sense of the Agreement on Technical Barriers to Trade. And absent a change by Microsoft in its rights regime, the work-arounds are technical.

    This is not to suggest that ODF lacks problems in regard to the way it implements standards incorporated by reference. The creation of unique OASIS namespaces rather than doing the needed harmonizing work with the relevant W3C WGs is a large ODF tumor in need of removal and reconstructive surgery. I'm not sure what is happening with the W3C consultation in that regard.

    I worked a good part of the time over several months comparing ODF and Ecma 376, evaluating their comparative suitability as document exchange formats. I gave up when it climbed well past 100 pages in length because the de
  •  
    1. Full-featured editors available that are capable of not generating application-specific extensions to the formats?
    2. Interoperability of conforming implementations mandatory?
    3. Interoperability between different IT systems either demonstrable or demonstrated?
    4. Profiles developed and required for interoperability?
    5. Methodology specified for interoperability between less and more featureful applications?
    6. Specifies conformity requirements essential to achieve interoperability?
    7. Interoperability conformity assessment procedures formally established and validated?
    8. Document validation procedures validated?
    9. Specifies an interoperability framework?
    10. Application-specific extensions classified as non-conformant?
    11. Preservation of metadata necessary to achieve interoperability mandatory?
    12. XML namespaces for incorporated standards properly implemented? (ODF-only failure because Microsoft didn't incorporate any relevant standards.)
    13. Optional feature interop breakpoints eliminated?
    14. Scripting language fully specified for embedded scripts?
    15. Hooks fully specified for use by embedded scripts?
    16. Standard is vendor- and application-neutral?
    17. Market requirement -- Capable of converging desktop, server, Web, and mobile device editors and viewers? (OOXML better equipped here, but its patent barrier blocks.)
  •  
    Didn't notice that my post before last was chopped at the end until after I had posted the list. Then Diigo stopped responding for a few minutes. Anyway, the list is a short summation of my research on the comparative suitabilities of ODF 1.1 and Ecma 376 as document exchange formats, winnowed to the defects they have in common except as noted. The research was never completed because, in the political climate of the time, the world wasn't ready to act on the defects. The criteria applied were as objective as I could make them; they were derived from competition law, JTC 1 Directives, and market requirements. I think the list is as good today in regard to IS 29500 as it was then in regard to Ecma 376, although I have not taken an equally deep dive into 29500. You might find the list useful, albeit there is more than a bit of redundancy in it.
Gary Edwards

Microsoft pushes Trade Secrets Bill - 1 views

  • A spokesman for the Microsoft On The Issues website has expressed the company’s support for new legislation that would reform the legal framework for companies wishing to protect their trade secrets in a cloud-centric world where such information is frequently forced to reside on networks. In the post, Microsoft’s Assistant General Counsel of IP Policy & Strategy Jule Sigall rallies behind business and academic concerns supporting the proposed Defend Trade Secrets Act 2015 (DTSA), which goes before the United States Senate Judiciary Committee today. Sigall, who is also Associate General Counsel for Copyright in Microsoft’s Legal & Corporate Affairs department, makes an ardent case for reform of the current legislation, as furnished by the Uniform Trade Secrets Act (UTSA). UTSA’s provisions are argued to be fractured, and rendered ineffective both by the inability of plaintiffs to pursue suits in federal courts (despite trade secret infractions being federal by nature), and by the fact that not all states have adopted or instituted all the measures provided by the legislation. Additionally, the limited provisions for redress in international cases of trade secret theft are to be addressed.
  • Sigall presents the case of Microsoft’s Cortana AI as an example of why new legislation is necessary: ‘[Behind] Cortana sits a vast amount of technology developed or enhanced in-house by Microsoft – voice recognition; language translation; reactive and predictive algorithms that can synthesize context, location and data, and interface with the vast resources of the Bing search engine index; and a complex array of cloud servers to crunch and serve data in real time. This technology represents tens of thousands of hours of research, trial and error, and continued improvement as Cortana is adapted for new devices and new scenarios’
  • Sigall argues that better protection procedures for trade secrets, the only form of IP which currently lacks comprehensive cover in law, are essential for start-ups whose ideas, business plans and even customer lists may constitute the only marketable value of a company that is just in the stage of consolidating. ‘A trade secret is unique among forms of intellectual property in how it is legally protected. While it is a federal crime to steal a trade secret, a business that has its trade secrets stolen must rely on state law to pursue a civil remedy. Owners of copyrights, patents, and trademarks can go to federal court to protect their property and seek damages when their property has been infringed, but trade secret owners do not have access to such a federal remedy.’
  • ...7 more annotations...
  • Defend Trade Secrets Act 2015 contains [PDF] significant material from its doomed predecessor of 12 months ago, and one of its boldest initiatives is the extension of ex parte seizures, instituted in UTSA in a more limited form (particularly in the 1985 amendment to the Uniform Law Commission’s 1979 initial legislation). An ex parte seizure provides a kind of restraining order or injunction on disputed information, or even the dissemination of knowledge about whether the information is disputed, and places it under federal protection on the plaintiff’s behalf.
  • Microsoft had a hard time adjusting to the open source revolution, particularly in regard to the PC/Mac Office product, which at one time represented the most successful and ubiquitous software in the world. The many legal and semantic wrangles over the closed-source nature of Office formats such as Word led ultimately to a hybridised open source .docx format which is still argued not to be the OpenXML that was promised.
  • According to Sigall, the state-by-state system currently in place was ‘simply not built with the digital world in mind’; he calls for ‘A uniform, national standard for protection’ which does not stop at state lines or even national borders.
  • In practical terms this seems likely to extend the circumstances under which information about leaks, hacks or thefts of information can be made the subject of gag orders for legal reasons, since it brings trade secrets into the same legal framework as other forms of intellectual property which enjoy more comprehensive coverage and recourse in law. The bill would also extend the purview of the 1996 Economic Espionage Act to take in a more rigorously conceived concept of ‘trade secrets’.
  • Even with the issues clear, the risk of disproportionate or over-reaching response in the event of the new bill passing successfully through Congress in 2016 (it is unlikely to pass this year) is clear enough that the lack of network discussion about it is quite surprising. Essentially DTSA represents the same kind of proposed ‘judicial fast track’ – though in favour of corporations instead of governments – that has outraged so many commenters in the wake of the November 13th Paris attacks.
  • Silence in court: Amongst its more quotidian clauses, the Defend Trade Secrets Act 2015 effectively offers corporate plaintiffs increased opportunity to federalise disputed private material in cases involving trade secrets, with all the penalties for infraction associated with that change of status – and far greater scope for sub judice orders likely to contain and conceal future breaches of information.
  • Eric Goldman of the Santa Clara University School of Law has just published a paper outlining the risks of extending ex parte seizures in the manner that DTSA 2015 proposes. Goldman writes that ‘the Seizure Provision does not solve many, if any, problems. In light of the remedies already available to trade secret owners in ex parte temporary restraining orders (TROs), the Seizure Provision purports to apply to only a narrow set of additional circumstances. In exchange for that modest benefit, the Seizure Provision creates the risk of anti-competitive seizures and seizures that cause substantial collateral damage to innocent third parties. To discourage such abuses, the Act imposes procedural safeguards and creates a cause of action for wrongful seizures. Unfortunately, those safeguards are miscalibrated to achieve the desired protections against abusive seizures.’
  •  
    Lots of possible Constitutional issues lurking. The Constitution creates only two types of intellectual property, patents and copyrights. "(P)roperty interests . . . are not created by the Constitution. Rather, they are created and their dimensions are defined by existing rules or understandings that stem from an independent source such as state law." Ruckelshaus v. Monsanto Co., 467 US 986 (1984), https://goo.gl/ZljO1H (trade secrets case). The traditional source of rights in trade secrets have been state law. Thus there is a state's rights issue lurking in this legislation, a question whether the federal government is invading the States' police power, an "our federalism" question.