Future of the Web: Group items tagged xml

Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things.
    Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.
    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
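    A hedged illustration of that class-based extensibility (the class names below are invented for this example; they are not part of XHTML or of any published microformat):

      <p class="epigraph">Non sunt multiplicanda entia sine necessitate.</p>
      <p class="summary">This chapter argues that XHTML is "good enough" XML.</p>
      <p>An ordinary paragraph needs no class at all.</p>

    A downstream stylesheet or transformation can treat class="epigraph" much as a richer schema would treat a dedicated epigraph element, which is the whole point.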
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
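    To make that anatomy concrete, here is a sketch of a typical ePub's contents once unzipped (file and directory names vary by producer; OEBPS is a convention, not a requirement), followed by the standard META-INF/container.xml that points reading systems at the package file:

      mimetype                  (the literal string "application/epub+zip")
      META-INF/container.xml    (locates the package file)
      OEBPS/content.opf         (metadata, manifest, spine)
      OEBPS/toc.ncx             (navigation)
      OEBPS/chapter01.xhtml     (the book content, in XHTML)

      <?xml version="1.0"?>
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>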
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
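    A representative sketch of what an unpacked IDML archive looks like (names abbreviated; the actual package contains more resource files than shown here):

      mimetype                 (application/vnd.adobe.indesign-idml-package)
      designmap.xml            (ties the package together)
      MasterSpreads/           (master pages)
      Spreads/Spread_*.xml     (layout spreads)
      Stories/Story_*.xml      (the actual text content)
      Resources/Styles.xml     (paragraph and character style definitions)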
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
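    As a concrete before-and-after sketch of that cleanup (the class name here is illustrative; the exact classes InDesign emits vary by version and by the document's style names):

      <!-- As exported: list items tagged as classed paragraphs -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After cleanup: structural XHTML list markup -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>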
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
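    A minimal sketch of what such a bare-bones export template might emit (the markup is illustrative, not the project's actual template):

      <?xml version="1.0" encoding="UTF-8"?>
      <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Chapter 1</title>
          <style type="text/css">
            /* single content column, no navigation chrome */
            body { max-width: 33em; margin: 0 auto; }
          </style>
        </head>
        <body>
          <h1>Chapter 1</h1>
          <p>Chapter content only; everything else is stripped.</p>
        </body>
      </html>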
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
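    We do not have the authors' actual script[18], but a minimal XSLT sketch of the general approach might look like the following. The ICML side is deliberately simplified: real ICML wraps runs in additional structure (a Story element, style-path prefixes, and so on), so the element and style names here are illustrative only.

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:x="http://www.w3.org/1999/xhtml">

        <!-- Map each XHTML paragraph to an ICML-style paragraph run -->
        <xsl:template match="x:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>

        <!-- Drop stray text that is not explicitly matched above -->
        <xsl:template match="text()"/>
      </xsl:stylesheet>

    Run with something like "xsltproc book2icml.xsl chapter.xhtml > chapter.icml" (file names hypothetical), and the output can then be placed into the InDesign template as described.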
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
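    For reference, the package file that a tool like eCub writes is itself tiny. A minimal sketch of an OPF 2.0 package (title, identifier, and file names invented for the example):

      <package xmlns="http://www.idpf.org/2007/opf"
               version="2.0" unique-identifier="bookid">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:0000-example</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch1"/>
        </spine>
      </package>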
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is the XML reformulation of HTML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML, by contrast, is an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

JSON 2 XML (XBEL) - 0 views

  •  
    [This online tool allows you to convert a JSON file into an XML file. This process is not 100% accurate, in that XML uses different item types that do not have an equivalent JSON representation. The following rules will be applied during the conversion process:
    - A default root element is created
    - JSON array entries are converted to individual XML elements
    - All JSON property values will be converted to #text item types
    - Offending characters will be XML escaped]
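    Applying those rules to a small invented example (the root element name is whatever default the tool chooses; shown here as "root"):

      <!-- Input JSON: {"title": "XBEL test", "links": ["a.html", "b.html"]} -->
      <root>
        <title>XBEL test</title>
        <links>a.html</links>
        <links>b.html</links>
      </root>

    Each array entry becomes its own element, and the property values land as #text nodes, as the rules above describe.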
Paul Merrell

Learning from our Mistakes: The Failure of OpenID, AtomPub and XML on the Web - 1 views

  • At the turn of the last decade, XML could do no wrong. There was no problem that couldn’t be solved by applying XML to it, and every technology was going to be replaced by it. XML was going to kill HTML. XML was going to kill CORBA, EJB and DCOM as we moved to web services. XML was a floor wax and a dessert topping. Unfortunately, after over a decade it is clear that XML has not become, and is unlikely to ever be, the dominant way we create markup for consumption by browsers or the way applications on the Web communicate. James Clark’s “XML vs the Web” talks about this grim realization.
Gary Edwards

ptsefton » OpenOffice.org is bad for the planet - 0 views

  •  
    ptsefton continues his rant that OpenOffice does not support the Open Web. He's been on this rant for so long, I'm wondering if he really thinks there's a chance the lords of ODF and the OpenOffice source code are listening? In this post he describes how useless it is to submit his findings and frustrations with OOo in a bug report. Pretty funny stuff, even if you do end up joining the Michael Meeks trek along this trail of tears. Maybe there's another way?

    What would happen if pt moved from targeting the not so open OpenOffice, to target governments and enterprises trying to set future information system requirements?

    NY State is next up on this endless list. Most likely they will follow the lessons of exhaustive pilot studies conducted by Massachusetts, California, Belgium, Denmark and England, and end up mandating the use of both open standard "XML" formats, ODF and OOXML.

    The pilots concluded that there was a need for both XML formats, depending on the needs of different departments and workgroups. The pilot studies scream out a general rule of thumb: if your department has day-to-day business processes bound to MSOffice workgroups, then it makes sense to use MSOffice OOXML going forward. If there is no legacy MSOffice-bound workgroup or workflow, it makes sense to move to OpenOffice ODF.

    One thing the pilots make clear is that it is prohibitively costly and disruptive to try to replace MSOffice bound workgroups.

    What NY State might consider is that the Web is going to be an important part of their information systems future. What a surprise. Every pilot recognized and, indeed, emphasized this fact. Yet they fell short of the obvious conclusion: mandating that desktop applications provide native support for Open Web formats, protocols and interfaces!

    What's wrong with insisting that desktop applications and office suites support the rapidly advancing HTML+ technologies as well as the applicat
Paul Merrell

Introducing the Open XML Format External File Converter for 2007 Microsoft Office Syste... - 0 views

  • In other words, revising the Open XML Format converter interfaces by adding new functionality does not require any recompilation of existing clients. This guarantees backward compatibility as these converter interfaces are upgraded.
    • Paul Merrell
       
      But what does it do for forward compatibility? OOXML is a moving interoperability target.
  • In addition to allowing converters to override external file formats, the applications allow converters to override OpenDocument Format-related formats (such as .odt). For example, if you specify a converter to be the default converter for .odt, Word 2007 SP2 invokes the specified converter whenever a user tries to open an .odt file from the Windows Shell instead of going through the native load path for Word 2007 SP2.
    • Paul Merrell
       
      How wonderful. Developers can bypass the forthcoming Microsoft native file support for ODF. Perhaps to convert Excel formulas to OpenFormula?
  • Open XML Format converters for Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 are implemented as out-of-process COM servers. Out-of-process converters have the benefit of running in their own process space, which means issues or crashes within converters do not affect the application process space. In addition, out-of-process 32-bit converters can function on 64-bit operating systems in Microsoft Windows on Windows 64-bit (WoW64) mode without the need for converters to be compiled in 64-bit.
    • Paul Merrell
       
      Pretty lame excuses for not documenting the native file support APIs. I.e., the native file support APIs already throw "can't open file" error messages for problematic documents without crashing the app. The bit about not needing to recompile converters for 64-bit Windoze is a complete red herring. This is only a benefit if one requires conversion in an external process. It wouldn't be an issue if the native file support APIs were documented and their intermediate formats were the interop targets.
    • Paul Merrell
       
      I.e., one need not recompile the Office app if a supported native format is added. The OpenDocument Foundation and Sun plug-ins for MS Office proved that.
  • ...3 more annotations...
  • To begin developing a converter, you should familiarize yourself with the Open XML standard. For more information, see: Standard ECMA-376: Office Open XML File Formats.
    • Paul Merrell
       
      Note that they specify Ecma 376 rather than ISO/IEC 29500:2008 Office Open XML. So you get to rewrite your converters when Microsoft adds support for the official standard in the next major release of Office.
  • External files are imported into Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 by converting the external file to Open XML Formats. External files are exported from Word 2007 SP2, Excel 2007 SP2, or PowerPoint by converting Open XML Formats to external files. The success of either the import or export conversion depends upon the accurate generation and interpretation of Open XML Formats by the converter.
    • Paul Merrell
       
      Note that this is a process external to the native file support APIs and their intermediate formats. The real APIs apparently will remain obfuscated. This forces others to develop support for Ecma 376 rather than working directly with the native file support APIs. In other words, more incentives for others to target the moving target OOXML rather than the more stable intermediate formats.
  • Summary: Get the details about the interfaces that you need to use to create an Open XML Format External File Converter for the 2007 Microsoft Office system Service Pack 2 (SP2). (16 Printed Pages)
Paul Merrell

XML.Gov - Home Page - 0 views

shared by Paul Merrell on 02 Dec 08
  • Extensible Markup Language (XML) embodies the potential to alleviate many of the interoperability problems associated with the sharing of documents and data. Realizing the potential requires cooperation not only within but also across organizations. Our purpose is to facilitate the efficient and effective use of XML through cooperative efforts among government agencies, including partnerships with commercial and industrial organizations.
  •  
    The U.S. Feds' home site for XML-related initiatives. Activities include coordination of federal actions to capture the power of structured data information systems in XML formats. Site includes federal strategic plan.
Gary Edwards

Developing a Universal Markup Solution For Web Content - 0 views

  •  
    KODAXIL To Replace XML?

    File this one under the Universal Interoperability label. Very interesting. Especially since XML document formats have proven to fall short on the two primary expectations of users: interoperability and Web readiness. Like HTML+ :) Maybe KODAXIL will work?

    The recent Web 2.0 Conference was filled with new web services, portals and wiki efforts trying their best to mash data into document objects. iCloud, MindTouch, AppLogic, 3Tera, Caspio and Gazoodle all deserve attention, although each took a rather different approach to solving the problem. MindTouch in particular was excellent.

    "A Montreal-based software and research development company has developed a markup solution and language-neutral asset-descriptor that when fully developed, could result in a universal computer language for representing information in databases, web and document contents and business objects."

    "While still at a seminal stage of development, the company Gnoesis, aims to address the problem of data fragmentation caused by semantic differences between developers and users from different linguistic backgrounds."

    Gnoesis, the company that has developed the language called KODAXIL (Knowledge, Object, Data, Action, and eXtensible Interoperable Language), a data and information representation language, says the new language will replace the XML function of consolidating semantically identical data streams from different languages, by creating a common language to do this.

    The extensible semantic markup associated with this language will be understood worldwide and is three times shorter than XML.
Paul Merrell

Google Open Sources Google XML Pages - O'Reilly News - 0 views

  • At OSCON 2008, Gonsalves made the announcement that, after several years of consideration, Google was releasing Google XML Pages (or GXP) under the Apache Open Source License.
  • Originally developed as a Python interpreter that produced Java source code, GXP was rewritten in 2006-7 to be a completely Java-based application. The idea behind GXP is fairly simple (and is one that is used, in slightly different fashion, for Microsoft's XAML and Silverlight): a web designer can declare a number of XML namespaces that define specific libraries on an XHTML or GXP container element, intermixing GXP and XHTML code in order to perform conditional logic, invoke server components, define state variables or create template modules. This GXP code is then parsed and used to generate the relevant Java code, which in turn is compiled into a server module invoked from within a Java servlet engine such as Tomcat or Jetty and cached on the server.
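    Google's documentation is the authority on exact syntax, but from the description above a GXP template would look roughly like this (the namespace URI, element names, and attributes are reconstructed from memory and should be treated as illustrative):

      <gxp:template name="com.example.HelloPage"
                    xmlns="http://www.w3.org/1999/xhtml"
                    xmlns:gxp="http://google.com/2001/gxp">
        <gxp:param name="userName" type="String"/>
        <gxp:if cond="userName != null">
          <p>Hello, <gxp:eval expr="userName"/>!</p>
        </gxp:if>
      </gxp:template>

    The parser would turn a template like this into Java source, which is then compiled and cached inside the servlet engine, as described above.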
Paul Merrell

Will Language Overload Force Open Enterprises? - 0 views

  • "XML doesn't have the monopoly it used to have," Bray said. "It used to be that if you wanted to send messages back and forth across the wire, XML was the only game in town." "Clearly, there is going to more heterogeneity for wire formats, too," he added. "Increasingly, given that the future will have more than two programming languages, there will be a lot more messages being passed around."
  • Bray comes to the issue from a unique perspective. As a co-author of the original XML specifications, he helped create a language that at one point had been intended to become the lingua franca for all Web communications. He now admits that he wasn't entirely accurate in his original vision of where XML would end up.
  • "I was so completely wrong on everything," Bray admitted. "We thought we were going to replace HTML and that turned out to be a silly idea. It turned out be used for syndication feeds and purchase orders and a million other things, and that's fine. Things find their own level." However, in today's world of language proliferation, Bray has another favorite approach for exchanging information. "I am a strong partisan of the REST (define) approach, which provides a Web-based approach for integration of anything to anything," Bray said. "REST isn't tied to XML or JSON but it makes it easy to use either."
Paul Merrell

XHTML Modularization 1.1 Released as W3C Recommendation - 0 views

  • The modularization of XHTML refers to the task of specifying well-defined sets of XHTML elements that can be combined and extended by document authors, document type architects, other XML standards specifications, and application and product designers to make it economically feasible for content developers to deliver content on a greater number and diversity of platforms. Over the last couple of years, many specialized markets have begun looking to HTML as a content language. There is a great movement toward using HTML across increasingly diverse computing platforms. Currently there is activity to move HTML onto mobile devices (hand held computers, portable phones, etc.), television devices (digital televisions, TV-based Web browsers, etc.), and appliances (fixed function devices). Each of these devices has different requirements and constraints.
  • XHTML Modularization is a decomposition of XHTML 1.0, and by reference HTML 4, into a collection of abstract modules that provide specific types of functionality. These abstract modules are implemented in this specification using the XML Schema and XML Document Type Definition languages. The rules for defining the abstract modules, and for implementing them using XML Schemas and XML DTDs, are also defined in this document. These modules may be combined with each other and with other modules to create XHTML subset and extension document types that qualify as members of the XHTML-family of document types.
  • ...1 more annotation...
  • Modularizing XHTML provides a means for product designers to specify which elements are supported by a device using standard building blocks and standard methods for specifying which building blocks are used. These modules serve as "points of conformance" for the content community. The content community can now target the installed base that supports a certain collection of modules, rather than worry about the installed base that supports this or that permutation of XHTML elements. The use of standards is critical for modularized XHTML to be successful on a large scale. It is not economically feasible for content developers to tailor content to each and every permutation of XHTML elements. By specifying a standard, either software processes can autonomously tailor content to a device, or the device can automatically load the software required to process a module. Modularization also allows for the extension of XHTML's layout and presentation capabilities, using the extensibility of XML, without breaking the XHTML standard. This development path provides a stable, useful, and implementable framework for content developers and publishers to manage the rapid pace of technological change on the Web.
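    As a hedged sketch of what combining modules means at the schema level (the module file names here are illustrative; the Recommendation defines the real ones), a device profile might assemble its document type by including only the modules it supports:

      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="http://www.w3.org/1999/xhtml"
                 xmlns="http://www.w3.org/1999/xhtml"
                 elementFormDefault="qualified">
        <!-- Pull in only the modules this device profile supports -->
        <xs:include schemaLocation="xhtml-text-module.xsd"/>
        <xs:include schemaLocation="xhtml-hypertext-module.xsd"/>
        <xs:include schemaLocation="xhtml-list-module.xsd"/>
        <!-- No forms, frames, or scripting on this profile -->
      </xs:schema>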
Gary Edwards

The NeuroCommons Project: Open RDF Ontologies for Scientific Research - 0 views

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as useable as they can be. This is done by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". Semantic Web practices based on RDF will enable knowledge sources to combine meaningfully and will support semantically precise queries that span multiple information sources.

    Working with the Creative Commons group that sponsors "Neurocommons", Microsoft has developed and released an open source "ontology" add-on for Microsoft Word. The add-on makes use of the MSOffice XML panel, Open XML formats, and proprietary "Smart Tags". Microsoft is also making the source code for both the Ontology Add-in for Office Word 2007 and the Creative Commons Add-in for Office Word 2007 tool available under the Open Source Initiative (OSI)-approved Microsoft Public License (Ms-PL) at http://ucsdbiolit.codeplex.com and http://ccaddin2007.codeplex.com, respectively.

    No doubt it will take some digging to figure out what is going on here. Microsoft WPF technologies include Smart Tags and LINQ. The Creative Commons "Neurocommons" ontology work is based on W3C RDF and SPARQL. How these opposing technologies interoperate with legacy MSOffice 2003 and 2007 desktops is an interesting question. One that may hold the answer to the larger problem of re-purposing MSOffice for the Open Web?

    We know Microsoft is re-purposing MSOffice for the MS Web. Perhaps this work with Creative Commons will help to open up the Microsoft desktop productivity environment to the Open Web? One can always hope :)

    Dr. Dobb's has the Microsoft - Creative Commons announcement: "Microsoft Releases Open Tools for Scientific Research ... Joins Creative Commons in releasing the Ontology Add-in"
Gary Edwards

SVG Is The Future Of Application Development | SitePoint » - 0 views

  •  
    I could see this coming a mile away, and it's about time! ".... So if HTML can't deliver for us here, what will? Microsoft wants us to use Silverlight and Adobe wants us to use Flash and AIR, of course. And Apple…? Apple ostensibly wants us to use HTML5's canvas. Both Microsoft's and Adobe's contenders are proprietary, which seems to be reason enough for web developers to avoid them to a certain degree, and all of them muddy HTML, which is a dangerous thing for the semantic web. But Apple actually has a trick up its sleeve. Like Mozilla's been doing with Firefox, Apple has quietly been implementing better support for SVG, the W3C's Recommendation for XML-based vector graphics, into WebKit. SVG delivers the same kind of vector graphics capabilities that Flash does, but it does so using all the interoperability benefits that XML brings along for the ride. SVG is great for graphically displaying both text and images, manipulating them with declarative visual primitives, and it comes with a host of lickable effects. Ironically, SVG was originally jointly developed by both Adobe and Sun Microsystems but recently it's Sun Labs that has been doing interesting stuff with the technology. The most compelling experiment of this kind has to be Sun Labs's Lively Kernel project....."
Gonzalo San Gil, PhD.

Export - Support - WordPress.com (Backup) - 0 views

  •  
    "Export Your Content to Another Blog or Platform It's your content; you can do whatever you like with it. Go to Tools -> Export in your WordPress.com dashboard to download an XML file of your blog's content. This format, which we call WordPress eXtended RSS or WXR, will contain your posts, pages, comments, categories, and tags."
Paul Merrell

Xcerion's 'Icloud' Promises Marriage of Remote And Local Computing -- Xcerion -- Inform... - 0 views

  • Xcerion has continued to work toward the general release of its XML-based "Cloud OS," a service based on Xcerion XML Internet Operating System/3 (XIOS/3). The announcement of an official name for the service brings the company a step closer to that goal; it also certainly reassures investors like Lou Perazzoli, one of the core architects of Microsoft (NSDQ: MSFT) Windows NT, and Terry Drayton, founder of HomeGrocer.com, that Xcerion's technology is almost ready for prime time.
  • Icloud relies on an XML virtual machine for local (and offline) operation. It thus combines the advantages of remote computing -- a central point for software distribution, storage, and updates -- with the advantages of local computing -- execution speed and user control without a bandwidth bottleneck.
  • Icloud offers an intriguing technology that Xcerion is calling "gesture-based computing." Jonas Thornholm, CFO of Xcerion, believes it may be the service's "killer app." Gesture-based computing is essentially real-time content sharing. It allows users to drag and drop documents from their computer to a friend's computer in real time, as if the two machines were dual monitors powered by a single machine.
  • ...1 more annotation...
  • Another point of differentiation between Icloud and other WebTop systems is the breadth of Xcerion's ambitions: It's aiming not just to move the desktop into the Internet "cloud" but also to reinvent the economics of software development. Icloud developers can look forward to an Internet-based marketplace for their Web applications that includes monetization technology. They will be able to offer free, ad-supported, or fee-based software with minimal hassle.
Paul Merrell

Office Business Applications for Store Operations - 0 views

  • Service orientation addresses these challenges by centering on rapidly evolving XML and Web services standards that are revolutionizing how developers compose systems and integrate them over distributed networks. No longer are developers forced to make do with rigid and proprietary languages and object models that used to be the norm before service orientation came into play. The emergence of this new methodology is helping to develop new approaches specifically for Web-based distributed computing. This revolution is transforming the business by integrating disparate systems to establish a real-time enterprise. Making information available where it is needed to simplify merchandising processes requires a methodology that is based on loosely coupled integration between various in-store and back-end applications. This demand makes it critical for an architecture that is based on service orientation for integration between disparate applications. In addition, surfacing information at the right place requires the ability to compose dynamic applications using an array of underlying services. The Office Business Applications platform provides this ability to create composite applications, such as dashboards for the store, regional, and corporate managers.
  •  
    Summary: Changing market conditions require agility in business applications. Service orientation answers the challenge by centering on XML and Web services standards that revolutionize how developers compose systems and integrate them over distributed networks. Once integrated, how is the information presented to the decision makers? (36 printed pages)
Paul Merrell

Alfresco Labs 3.0 Final Version Supports CMIS - 0 views

  • Alfresco Software Inc., today announced the general availability of Alfresco Labs 3 Final. This is a milestone release for Alfresco Labs and is immediately available for download under the open source GPL license at: http://wiki.alfresco.com/wiki/Download_Labs
    "In the current economic environment organizations seek more cost effective and productive methods of managing increased volumes of content and greater levels of compliance. Alfresco delivers an innovative solution for ECM, while dramatically reducing the associated costs," said John Newton, CTO of Alfresco Software. "This release is designed to be the open source content services platform for all Alfresco and non-Alfresco content applications from document management and web content management to wikis. Alfresco has already utilized the emerging CMIS standard to integrate content services to other open source systems like Joomla, as well as offering integrations to MediaWiki, Open Office and WordPress. We strongly recommend that our open source community download this release."
  • Native SharePoint protocol support: Seamless document editing via SharePoint protocol
    Flex Document Previewer: Zoom, snap points and full-screen
    AJAX Calendar: Drag-and-drop event support
    Links Directory Manager: Share internal and external links
    Document Management: Enhanced SharePoint protocol site and workspace support
    Email Management: Email-In Smart Folders, email storage with attachment support
  • CMIS REST and Web Services binding; Content Management Interoperability Services (CMIS) support
    SharePoint Protocol Support: Native SharePoint Protocol support from Microsoft Office and Alfresco Share
  • ...1 more annotation...
  • Alfresco has seen major adoption of its open source ECM system throughout the world. There have been over 1.5 million downloads of Alfresco Labs. Alfresco Labs is designed to be the research vehicle for new features, enabling developers to access a nightly build with the latest functionality. The Alfresco Labs 3 build is a stable build with basic QA against an open source stack. Alfresco Enterprise is the supported Alfresco build and is used by more than 700 enterprise customers, including the NYSE, Los Angeles Times, Boise Cascade, Sony Pictures, Activision, Kaplan, FedEx, and KLM.
  •  
    Virtually all of the big ECM players have joined the OASIS CMIS TC, but how many are there to collaborate and how many to obstruct? The Alfresco Labs FOSS CMIS and BPM hub seems to be gaining by leaps and bounds and now offers even more app interop connections, including -- vitally -- with SharePoint. CMIS is a standard we might keep an eye on.
Paul Merrell

International Digital Publishing Forum (formerly Open eBook Forum) - 0 views

shared by Paul Merrell on 29 May 08
  • EPUB Support from list of Publishers
    An Open Letter from AAP to IDPF
  • What is EPUB, .epub, OPS/OCF & OEB?
    ".epub" is the file extension of an XML format for reflowable digital books and publications. ".epub" is composed of three open standards, the Open Publication Structure (OPS), Open Packaging Format (OPF) and Open Container Format (OCF), produced by the IDPF. "EPUB" allows publishers to produce and send a single digital publication file through distribution and offers consumers interoperability between software/hardware for unencrypted reflowable digital books and other publications. The Open eBook Publication Structure or "OEB", originally produced in 1999, is the precursor to OPS. For the latest on IDPF standards, sample files and companies who have implemented our specifications, please visit our public forums. Getting started? Visit our FAQs.
  •  
    Will ePub be the standard that converges the desktop, the server, devices, and the Web? ePub is an implementation of the W3C Compound Document Formats interoperability framework with excellent packaging, container, and markup components. ePub is also strongly integrated with Daisy XML for accessibility, "talking books," and document structure, hinting at a voice-interactive future for publishing. ePub has been developed as a vendor-neutral standard and is being implemented by a large number of major book publishers globally, a factor that should spur major development of both editing and rendering software and devices.
Gary Edwards

Sun pitches new cloud as 'Open Platform' * - 0 views

  •  
    Sun takes on the problem of interoperability and portability of applications in a world where there will be many, many clouds. At the rollout of the Sun Cloud, key executives explain Sun's implementation of Open Cloud APIs and what they see as a pressing need for management tools that will allow some standardization across clouds.

    Sun's Open Cloud API plan is a clean reuse of existing Open Web APIs.

    "..... The underpinning of the Open Cloud Platform that Sun will be pitching to developers is a set of cloud APIs, the creation of which is focused under Project Kenai and which has been released under a Community Commons open source license. Sun wants lots of feedback on the APIs and wants these APIs to become a standard too, hence the open license. These APIs describes how virtual elements in a cloud are created, started, stopped, and hibernated using HTTP commands such as GET, PUT, and POST...."

    "...... The upshot is that these APIs will allow programmatic access to virtual infrastructure from Java, PHP, Python, and Ruby and that means system admins can script how virtual resources are deployed. The APIs, as co-creator Tim Bray explains in his blog, are written in JavaScript Object Notation (JSON), not XML. The Q-Layer software is a graphical representation of what is going on down in the APIs, and you can moving virtual resources into the cloud with a click of a mouse using the dashboard or programmatically using the APIs from those four programming languages listed above. (PHP support is not yet available, but will be)....."
  •  
    I can see why Sun picked those four languages first. Can I assume that with a bit of work, this API will be usable from any language with a C "foreign function interface", such as Perl, Common Lisp, Bourne shell, Squeak Smalltalk, and others that your server application might be written in?
  •  
    I read this comment, which largely answers my question, at: http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud "So right now JSON out of a shell tool is not so good. More things like this will create pressure for development of tools to change that, but years of widespread XML/HTML deployment have only produced a few oddly maintained tools. Perhaps that's because you can scrape quite a bit of the web with a couple sed passes, and if I were to have to deal with the mentioned tools, that's probably the route I'd take." (seth w. klein) In other words, anything that can talk text over HTTP can do this with a bit of work, but an object-oriented language is likely to be more at home with JSON (JavaScript Object Notation).
Gary Edwards

Developer: Dump JavaScript for faster Web loading | CIO - 0 views

  • Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states.
  • The browser "then replaces DOM elements with whatever data that was loaded as needed.
  • The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript."
  •  
    "A W3C (World Wide Web Consortium) mailing list post entitled "HTML6 proposal for single-page Web apps without JavaScript" details the proposal, dated March 20. "The overall purpose [of the plan] is to reduce response times when loading Web pages," said Web developer Bobby Mozumder, editor in chief of FutureClaw magazine, in an email. "This is the difference between a 300ms page load vs 10ms. The faster you are, the better people are going to feel about using your Website." The proposal cites a standard design pattern emerging via front-end JavaScript frameworks where content is loaded dynamically via JSON APIs. "This is the single-page app Web design pattern," said Mozumder. "Everyone's into it because the responsiveness is so much better than loading a full page -- 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. Since this is so common now, can we implement this directly in the browsers via HTML so users can dynamically run single-page apps without JavaScript?" Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states. The browser "then replaces DOM elements with whatever data that was loaded as needed." The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript." JavaScript frameworks and JavaScript are leveraged for loading now, but there are issues with these, Mozumder explained. "Should we force millions of Web developers to learn JavaScript, a framework, and an associated templating language if they want a speedy, responsive Web site out-of-the-box? This is a huge barrier for beginners, and right n