Home/ Document Wars/ Group items tagged linux

Gary Edwards

Munich reverses course, may ditch Linux for Microsoft | Network World - 0 views

  • Reiter has also criticized the city’s open-source initiatives since his election, saying that the technology sometimes lags behind that of Microsoft, and that compatibility issues can cause issues.
  • The news comes just eight months after Munich’s city council essentially declared victory, saying that the LiMux transition was complete and boasting of more than $15.6 million saved since the project began. Nearly 15,000 users were converted to the city’s customized Linux-based operating system.
  •  
    "The German city of Munich, long one of the open-source community's poster children for the institutional adoption of Linux, is close to performing a major about-face and returning to Microsoft products. Featured Resource Presented by Riverbed Technology 10 Common Problems APM Helps You Solve Practical advice for you to take full advantage of the benefits of APM and keep your IT environment Learn More Munich's deputy mayor, Josef Schmid, told the Süddeutsche Zeitung that user complaints had prompted a reconsideration of the city's end-user software, which has been progressively converted from Microsoft to a custom Linux distribution - "LiMux" - in a process that dates back to 2003."
Gary Edwards

Novell CEO confirms that Microsoft is a reality | The Register - 0 views

  • It was a performance that saw Hovsepian call Microsoft a reality the community must work with
  • Skimming over the details of Microsoft's support, Hovsepian said such deals are critical if Linux is going to give customers running mixed environments what they need, by delivering interoperability in the data center and on the desktop.
    • Gary Edwards
       
      No Kidding! The marketplace knows this full well. It's the FOSS and ODF Communities that are clueless. Interoperability with Microsoft-bound documents, applications and processes must be dealt with before Linux and ODF systems can begin to penetrate the growing Microsoft Stack. This is why the ODF iX proposals, five of which were submitted to the ODF TC for discussion in the past year alone, were critical to the success of ODF in California, Massachusetts, Denmark, Belgium and the EU-IDABC. Too bad the ODF TC doesn't understand this importance and the need to accommodate the marketplace.
Gary Edwards

Commercializing Interoperability -- Linux leaders plot hapless counterattack on Microsoft - 0 views

  • Is Microsoft commercializing "interoperability"? Is interoperability through privileged access to the interop API's now a strategic asset to be traded with partners in crime?
  •  
    The first post in the ZDNet series discussing the many deals Microsoft is cutting with prominent Linux vendors.  My point is that interoperability plays a prominent role in each of these deals, and that the deals also involve partners supporting Microsoft-directed interop between OpenOfficeXML and OpenDocument.  Coincidence?

    I think not!

Gary Edwards

Singing Kumbaya -- Linux leaders plot hapless counterattack on Microsoft - 0 views

  • have you noticed that IBM is softening their position on "harmonization"? There are a number of events to consider that might have influenced this change in tone:
  •  
    More in that same "Linux leaders plot counterattack on Microsoft" thread at ZDNet.  This time the issue is what has caused IBM to sing a different tune: the one known as "harmonization".
Gary Edwards

But can they implement ODF? South African Government Adopts ODF (and not OOXML) - 0 views

  • That said, it goes on to acknowledge that “there are standards which we are obliged to adopt for pragmatic reasons which do not necessarily fully conform to being open in all respects.”
  •  
    So, South Africa was closely watching the failed effort in Massachusetts to implement ODF?  And now they are determined to make it work? Good thing they left themselves a "pragmatic" out: "there are standards which we are obliged to adopt for pragmatic reasons which do not necessarily fully conform to being open in all respects."

    Massachusetts spent a full year on an ODF implementation Pilot Study only to come to the inescapable conclusion that they couldn't implement ODF without a high-fidelity, "round trip" capable ODF plug-in for MSOffice.  In May of 2006, Pilot Study in hand, Massachusetts issued their now infamous RFI, a "Request for Information" concerning the feasibility of an ODF plug-in clone of the MS-OOXML Compatibility Pack plug-in for MSOffice applications. At the time there was much gnashing of teeth and grinding of knuckles in the ODF Community, but the facts were clear. The lead dog hauling the ODF legislative-mandate sleigh could not make it without ODF interoperability with MSOffice. Meaning that ripping out and replacing MSOffice was no longer an option. For Massachusetts to successfully implement ODF, there had to be a high level of ODF compatibility with existing MS documents, and ODF application interoperability with existing MS applications. Although ODF was not designed to meet these requirements, the challenge could not have been any clearer. Changes in ODF would have to be made. So what happened?

    Over a year later,
Gary Edwards

ODF vs. OOXML: War of the Words | Andrew Updegrove: Tales of Adversego - 0 views

  •  
    "For some time I've been considering writing a book about what has become a standards war of truly epic proportions.  I refer, of course, to the ongoing, ever expanding, still escalating conflict between ODF and OOXML, a battle that is playing out across five continents and in both the halls of government and the marketplace alike.  And, needless to say, at countless blogs and news sites all the Web over as well. Arrayed on one side or the other, either in the forefront of battle or behind the scenes, are most of the major IT vendors of our time.  And at the center of the conflict is Microsoft, the most successful software vendor of all time, faced with the first significant challenge ever to one of its core businesses and profit centers - its flagship Office productivity suite. The story has other notable features as well:  ODF is the first IT standard to be taken up as a popular cause, and also represents the first "cross over" standards issue that has attracted the broad support of the open source community.  Then there are the societal dimensions: open formats are needed to safeguard our culture and our history from oblivion.  And when implemented in open source software and deployed on Linux-based systems (not to mention One Laptop Per Child computers), the benefits and opportunities of IT become more available to those throughout the third world. There is little question, I think, that regardless of where and how this saga ends, it will be studied in business schools and by economists for decades to come.  What they will conclude will depend in part upon the materials we leave behind for them to examine.  That's one of the reasons I'm launching this effort now, as a publicly posted eBook in progress, rather than waiting until some indefinite point in the future when the memories of the players in this drama have become colored by the passage of time and the influence of later events. My hope is that those of you who have played or are n
Gary Edwards

Microsoft's 'Men in Black' kill Florida open standards legislation - 0 views

  • Rep. Homan and his son Doug tried to add their little open standards boost to SB 1974 as quietly as possible. They wanted the modified bill to at least get through its first committee approval before anyone spotted what they had done. But Microsoft's Florida lobbyists were on the ball and spotted it almost immediately. "It was like the movie 'Men in Black,'" says Rep. Homan. "Three Microsoft lobbyists, all wearing black suits." Another lobbyist (unaffiliated with Microsoft) who would speak only "on background" laughed at the "Men in Black" description. "I know those guys," he said. "They even wear sunglasses like in that movie. They are the 'Men in Black' of Florida lobbying, for sure." A legislative staff employee who would lose his job if he were quoted here by name said, "By the time those lobbyists were done talking, it sounded like ODF (Open Document Format, the free and open format used by OpenOffice.org and other free software) was proprietary and the Microsoft format was the open and free one." Two other legislative employees (who must also remain anonymous) told Linux.com that the Microsoft lobbyists implied that elected representatives who voted against Microsoft's interests might have a little more trouble raising campaign funds than they would if they helped the IT giant achieve its Florida goals.
  •  
    It seems Microsoft has blocked another attempt by concerned legislators to mandate open file formats for government information.  Good read with some great quotes.  The legislative language itself is extremely well written.
Gary Edwards

Vista Aiding Linux Desktop, Strategist Says - 0 views

  • Crawford said a corporate desktop needs to be focused on the business user, compliant with company standards, interoperable, secure, and able to be shipped with an enterprise kernel and managed remotely, and to have standard applications installed. "The Linux desktop can do all of that. It can be interoperable with earlier versions of the operating system, is generally interoperable with Windows, can ship with an enterprise kernel and can be remotely managed by existing management solutions," he said.
Gary Edwards

Microsoft Support for ODF - the Q&A - 0 views

  • Hi Gary,I am a technology journalist with Asia's ONLY Linux-focused magazine, LINUX For You. I am working on a story revolving the recent development of Microsoft supporting ODF Format. I want to understand the equation of the whole development, would you please help me understand: Q1. What do you think drove Microsoft to support the ODF format?
  •  
    This is the full response to Swapnil's seven questions.  It's long.  But we hold back nothing!  Thanks again to Marbux.  He is a peach!
Gary Edwards

Ballmer threatens Linux and open source with patents again - Flock - 0 views

  • To handle IP conflicts between open source and proprietary software organizations, Ballmer wants to see what he calls "an intellectual property interoperability framework between the two worlds." He did not give any specifics on what such a framework would look like.
  •  
    You've got to be kidding me!  Ballmer wants to establish "an intellectual property interoperability framework" that open source communities would honor?  I think that's called "open standards," implemented according to the ISO, W3C and international trade agreement interoperability conformance requirements.

    Why doesn't Microsoft start with an honest effort to comply with the open standards the rest of the world has long since adopted?

    ~ge~

Gary Edwards

Linux Foundation Legal : Behind Putting the OpenDocument Foundation to Bed (without its... - 0 views

  • CDF is one of the very many useful projects that W3C has been laboring on, but not one that you would have been likely to have heard much about. Until recently, that is, when Gary Edwards, Sam Hiser and Marbux, the management (and perhaps sole remaining members) of the OpenDocument Foundation decided that CDF was the answer to all of the problems that ODF was designed to address. This announcement gave rise to a flurry of press attention that Sam Hiser has collected here. As others (such as Rob Weir) have already documented, these articles gave the OpenDocument Foundation’s position far more attention than it deserved. The most astonishing piece was written by ZDNet’s Mary Jo Foley. Early on in her article she stated that, “the ODF camp might unravel before Microsoft’s rival Office Open XML (OOXML) comes up for final international standardization vote early next year.” All because Gary, Sam and Marbux have decided that ODF does not meet their needs. Astonishing indeed, given that there is no available evidence to support such a prediction.
  •  
    Uh?  The ODF failure in Massachusetts doesn't count as evidence that ODF was not designed to be compatible with existing MS documents or interoperable with existing MSOffice applications?

    And it's not just the da Vinci plug-in that failed to implement ODF in Massachusetts!  Nine months later Sun delivered their ODF plug-in for MSOffice to Massachusetts.  The next day, Massachusetts threw in the towel, officially recognizing MS-OOXML (and the MS-OOXML Compatibility Pack plug-in) as a standard format for the future.

    Worse, the Massachusetts recognition of MS-OOXML came just weeks before the September 2nd ISO vote on MS-OOXML.  Why not wait a few more weeks?  After all, Massachusetts had conducted a year-long pilot study to implement ODF using ODF desktop office suite alternatives to MSOffice.  Not only did the rip-out-and-replace approach fail, but they were also unable to integrate OpenOffice ODF desktops into existing MSOffice-bound workgroups.

    The year-long pilot study was followed by another year-long effort to implement ODF using the plug-in approach.  That too failed, with Sun's ODF plug-in the final candidate to demonstrate the difficulty of implementing ODF in situations where MSOffice workgroups dominate.

    California and the EU-IDABC were closely watching the events in Massachusetts, as was most every CIO in government and private enterprise.  Reasoning that if Massachusetts was unable to implement ODF, they would fare no better, California CIOs flatly refused IBM and Sun's efforts to get a pilot study underway.

    Across the pond, in the aftermath of Massachusetts CIO Louis Gutierrez's resignation on October 4th, 2006, the EU-IDABC set about developing their own file format, ODEF.  The Open Document Exchange Format splashed into the public discussion on February 28th, 2007 at the "Open Document Exchange Workshop" held in Berlin, Germany.

    Meanwhile, the Sun ODF plug-in is fl
Gary Edwards

Open Document Foundation Gives Up | Linux Magazine - 0 views

  • The reason for the move to CDF, the foundation claimed at the time, was improved compatibility with Microsoft’s OOXML format. Chris Lilley from the W3C contradicted this: CDF is not an office format, and thus not an alternative to the Open Document Format. This rebuff is likely the reason for the abrupt ditching of the foundation.
  •  
    I've got to give this one extra points for creativity!  All anyone has to do is visit the W3C web site for CDF WICD Full 1.0 to realize that there is in fact a CDF profile for desktops.  CDF WICD Mobile is the profile for devices.

    My guess is that Chris Lilley is threading the needle here.  IBM, Groklaw, and the lawyer for OASIS have portrayed the Foundation's support for CDF WICD Full as a replacement for ODF - as in a native-file-format-for-OpenOffice kind of replacement.  Mr. Lilley insists that CDF WICD Full was not designed for that purpose.  It's for export only!  As in a conversion of native desktop file formats.

    Which is exactly what the da Vinci group was doing with MSOffice.  The Foundation's immediate interest in CDF WICD was based on the assumption that a similar conversion would be possible between OpenOffice ODF and CDF WICD.

    The Foundation's thinking was that if the da Vinci group could convert MSOffice documents and processes to CDF WICD Full, and, a similar conversion of OpenOffice ODF documents and processes to CDF WICD could be done, then near ALL desktop documents could be converted into a highly interoperable web platform ready format.

    Web platform ready documents from OpenOffice?  What's not to like?  And because the conversion between ODF and CDF WICD Full is so comparatively clean, OpenOffice would in effect (don't go native file format now) become a highly integrated rich-client end-user interface to advancing web platforms.

    The Foundation further reasoned that this conversion of OpenOffice ODF to CDF WICD Full would solve many of the extremely problematic interoperability issues that plague ODF.  Once the documents are in CDF WICD Full, they are cloud-ready and portable at a level certain to diminish the effects of desktop applications' specific feature sets and implementation models.

    In Massachusetts, the Foundation took
Gary Edwards

Barr: What's up at the OpenDocument Foundation? - Linux.com - 0 views

  • The OpenDocument Foundation, founded five years ago by Gary Edwards, Sam Hiser, and Paul "Buck" Martin (marbux) with the express purpose of representing the OpenDocument format in the "open standards process," has reversed course. It now supports the W3C's Compound Document Format instead of its namesake ODF. Yet why this change of course has occurred is something of a mystery.
  •  
    More bad information, accusations and smearing innuendo.  Wrong on the facts, emotional in its conclusions.  But wow, it's fun to see them with their panties in such a twist.

    The truth is that ODF is a far more "OPEN" standard than MS-OOXML could ever hope to be.  Sam's Open Standards arguments for the past five years remain as relevant today as when he first started making them so many years ago.

    The thing is, the Open Standards requirements are quite different from the real-world Implementation Requirements we tried to meet with ODF.

    The implementation requirements must deal with the reality of a world dominated by MSOffice.  The Open Standards arguments relate to a world as we wish it to be, but which is not.

    It's been said by analysts advising real-world CIOs that "ODF is a fine open-standards format for an alternative universe where MSOffice doesn't exist."

    If you live in that alternative universe, then ODF is the way to go.  Just download OpenOffice 2.3, and away you go.  Implementation is that easy.

    If however you live in this universe, and must deal with the impossibly difficult problem of converting existing MSOffice documents, applications and processes to ODF, then you're screwed. 

    All the grand Open Standards arguments Sam has made over the years will not change the facts of real-world implementation difficulties.

    The truth is that ODF was not designed to meet the real-world implementation requirements of compatibility with existing Microsoft documents (formats) and interoperability with existing Microsoft Office applications.

    And then there are the problems of ODF interoperability between ODF applications.  At the base of this problem is the fact that compliance in ODF is optional.  ODF applications are allowed to routinely destroy metadata information needed (and placed into the markup) by other applications.
Gary Edwards

Novell: We Surrender - Forbes.com - 0 views

  •  
    Ouch! Daniel Lyons has Novell with one foot in the grave and Microsoft shoveling fast and furious. For sure Mr. Lyons is unaware of Novell puppet masters IBM and Oracle. Novell has been a dead man walking for years. What they have that's really valuab
Gary Edwards

Joint letter to the Open Source Community From Novell and Microsoft - 0 views

  •  
    This makes me sick. The indemnification nazis are driving a patent wedge right through the heart and soul of open source.
Gary Edwards

ODF Turns Five | Linux - 2 views

  •  
    ODF was created on the principles that interoperability and innovation were paramount, and that these are based on open standards. Not coincidentally, ODF's creation coincided with the growing support of open ICT architectures, which grew from the Web model where the standardization of HTML, an open, royalty-free standard, enabled the Web to be an open platform that enabled much innovation on top of it. The key was interoperability, or the ability of multiple parties to communicate electronically, without the need to all run the same application software or operating system. Also critical to the development of ODF was the introduction of OpenOffice.org, the open source office suite that first implemented the format, and the rise of XML as a widely supported foundational standard for describing structured data.
Gary Edwards

An interesting offer: get paid to contribute to Wikipedia - Rick Jelliffe - 1 views

  •  
    Classic argument about ODF vs OOXML.  Need to send Rick an explanation of how the da Vinci plug-in works.  It is entirely possible to capture everything MSOffice editors do in ODF using namespace extensions compliant with the ODF 1.1 standard.  What was impossible was to round-trip those MSOffice ODF documents to OpenOffice.org.  And as it turns out, replacing MSOffice/Windows on new workgroup desktops with OpenOffice/Linux was one of the primary objectives behind the Massachusetts effort to standardize on ODF.  They believed the hype that ODF was cross-platform interoperable.  It wasn't then, and it still isn't five years later. As for capturing all the complexities and nuances of the very robust MSOffice productivity environment and authoring system?  Sure, ODF could easily be extended for that. What an incredible discussion!
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
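A quick way to see this anatomy for yourself is to open an ePub with any zip tool. A minimal Python sketch (the file name book.epub is only an illustration, not a file referenced in the article):

```python
# Peek inside an ePub to confirm it is XHTML content in a zip wrapper.
# "book.epub" is a placeholder name for any ePub file you have on hand.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)  # mimetype, META-INF/container.xml, the OPF package, XHTML chapters...

    # The container file points to the package (OPF) document, which in turn
    # lists the XHTML content files and their reading order.
    print(epub.read("META-INF/container.xml").decode("utf-8"))
```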
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
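A rough sketch of that clean-up step done programmatically rather than by hand-editing; the class name "list-item" is an assumption for illustration, since the real export uses whatever style names the InDesign document defines:

```python
# Convert presentation-oriented paragraphs from the "Digital Editions" export,
#   <p class="list-item">...</p>
# into structural XHTML list markup (<ul><li>...</li></ul>).
import re

def listify(xhtml: str) -> str:
    # 1. Turn each tagged paragraph into a list item.
    xhtml = re.sub(r'<p class="list-item">(.*?)</p>', r"<li>\1</li>", xhtml, flags=re.S)
    # 2. Wrap each run of consecutive list items in a single <ul>.
    return re.sub(r"(?:<li>.*?</li>\s*)+",
                  lambda m: "<ul>\n" + m.group(0) + "</ul>\n",
                  xhtml, flags=re.S)

print(listify('<p class="list-item">one</p>\n<p class="list-item">two</p>'))
```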
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
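A minimal sketch of what this kind of transformation step looks like, here using lxml's XSLT engine as a stand-in for xsltproc (any XSLT 1.0 processor would do). The output elements are simplified placeholders, not the real ICML vocabulary or the authors' actual stylesheet:

```python
# Apply a tiny XHTML-to-"ICML-like" stylesheet.  Story, ParagraphStyleRange and
# Content are simplified placeholders; a real script maps onto Adobe's documented
# IDML/ICML schema, with applied paragraph and character styles.
from lxml import etree

XSL = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:template match="/">
    <Story><xsl:apply-templates select="//x:p"/></Story>
  </xsl:template>
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="Body">
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(XSL)
xhtml = etree.XML(b'<html xmlns="http://www.w3.org/1999/xhtml">'
                  b'<body><p>Hello, print production.</p></body></html>')
print(str(transform(xhtml)))
```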
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
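A sketch of what a wrapper tool like eCub assembles, written out by hand with the standard library. The file names, identifier and metadata are illustrative only, and a production package also needs a table-of-contents (NCX or nav) file:

```python
# Wrap a single XHTML chapter in a minimal, EPUB 2-style container.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:identifier id="bookid">example-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""

CHAPTER = ('<html xmlns="http://www.w3.org/1999/xhtml">'
           '<body><h1>Chapter 1</h1><p>Web-first content.</p></body></html>')

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must be the first file in the archive and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("content.opf", OPF, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("chapter1.xhtml", CHAPTER, compress_type=zipfile.ZIP_DEFLATED)
```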
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also derived from SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to

    As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

The End of ODF & OpenXML - Hello ODEF! - 0 views

  •  
    Short slide deck of Barbara Held's February 28th, 2007 EU IDABC presentation. She introduces ODEF, the "Open Document Exchange Format," which is designed to replace both ODF and OpenOfficeXML. ComputerWorld recently ran a story about the end of ODF, as they covered the failure of six "legislative" initiatives designed to mandate ODF as the official file format. While the political treachery surrounding these initiatives is a story in and of itself, the larger story, the one that has worldwide reverberations, wasn't mentioned. The larger ODF story is that ODF vendors are losing the political battles because they are unable to provide government CIOs with real-world solutions. Here are three quotes from the California discussion that really say it all:

    "Interoperability isn't just a feature. It's the basic requirement for getting your XML file format and applications considered" ..... "The challenge is that of migrating our existing documents and business processes to XML. The question is which XML? OpenDocument or OpenXML?" ....... "Under those conditions, is it even possible to implement OpenDocument?" ....... Bill Welty, CIO of the California Air Resources Board, wondering if there was a way to support California legislative proposal AB-1668.

    This is hardly the first time the compatibility-interoperability issue has challenged ODF. Massachusetts spent a full year on a pilot study testing the top tier of ODF solutions: OpenOffice, StarOffice, Novell Office and IBM's WorkPlace (prototype). The results were a disaster for ODF. So much so that the 300-page pilot study report and accompanying comments wiki have never seen the light of day. In response to the disastrous pilot study, Massachusetts issued their now infamous RFI, a "request for information" about whether or not it's possible to write an ODF plug-in for MSOffice applications. The OpenDocument Foundation responded to the RFI with our da Vinci plug-in. The quick descriptio
Gary Edwards

But can money buy love? :: Another Microsoft Sponsored OOXML Study - 0 views

  •  
    Joe Wilcox of Microsoft Watch knocks another one out of the park. Why is it that so few in the media get it? Or anyone else for that matter? Matt Asay gets it. But few understand the Vista Stack and the importance of OOXML in the transition of the monopoly base from MSOffice to the Vista Stack. No doubt the arrogance of those who dare challenge Microsoft is both a necessary blessing and a guaranteed curse. Take for instance the widely held assumption that Microsoft invented MS-XML (OfficeOpenXML) in response to OpenDocument (ODF). This is false, misleading, and will inevitably result in a FOSS death spiral in the face of a Vista Stack juggernaut. But it sure does feel good.

    Joe Wilcox at Microsoft Watch points out the real reason for MS-XML, and why ISO approval of OOXML is so important. Microsoft needs OOXML approved as an international standard because OOXML is the binding model for the emerging Vista Stack of loosely coupled but information-integrated applications.

    The Vista Stack model converges desktop, server, device and web information systems using OOXML-Smart Documents, .NET 3.0 and the XAML presentation layer as the binding components.

    The challenge for Microsoft is to migrate existing MSOffice bound business processes, line of business integrated apps, and advanced add-ons to the Exchange/SharePoint Hub. Once the existing documents, applications (MSOffice) and processes are migrated to the E/S Hub, they can be bound tightly to the rest of the Vista Stack.

    Others see OOXML as some sort of surrender, or a late recognition that the salad days of MSOffice are over. They jubilantly point to Web 2.0, Office 2.0 and the rise of the Linux Desktop as having ushered in this end of the MSOffice monopoly. Like the ODF champions, these people are sadly mistaken!

    While they celebrate, Microsoft is quie