Document Wars: Group items tagged "support"

Microsoft Will Support ODF! But Only If It Doesn't 'Restrict Choice Among Formats'

  • By Marbux posted Jun 19, 2007 - 3:16 PM Asellus sez: "I will not say OOXML is easy to implement, but saying ODF is easier to implement just by looking at the ISO specification is a fallacy." I shouldn't respond to trolls, but I will this time. Asellus is simply wrong. Large hunks of Ecma 376 are simply undocumented. And what's more, absolutely no vendor has a featureful app that writes to that format. Not even Microsoft. There's a myth that Ecma 376 is the same as the Office Open XML used by Microsoft. It is not. I've spent a few hundred hours comparing the Ecma 376 specification (the version of OOXML being considered at ISO) to the information about the undocumented APIs used by MS Office 2007 that recently sprang loose in litigation. See http://www.groklaw.net/p...Rpt_Andrew_Schulman.pdf Each of those APIs *should* have corresponding metadata in the formats, but that metadata is not in the Ecma 376 specification.
  •  
    Incredible comment by Marbux!  With one swipe he takes out both Ecma 376 and ODF. 

    Microsoft has written a letter claiming that they will support ODF in MSOffice, but only if ISO approves Ecma 376 as a second office suite XML file format standard.  ODF was approved by ISO nearly a year ago.

    Criticizing Ecma 376 is easy.  It was designed to meet the needs of a proprietary application, MSOffice, and of the emerging MS Vista Stack of applications that spans desktop, server, device and web platforms.  It's filled with MS platform dependencies that make it impossibly non-interoperable with anything not fully compliant with Microsoft-owned APIs.

    Criticizing ODF however is another matter entirely.  Marbux points to the extremely poor ODF interoperability record.  If MOOXML (not Ecma 376, since that is a read-only file format) is tied to the vendor-specific MSOffice application, then ODF is similarly tied to the many vendor versions of OpenOffice/StarOffice.

    The "many vendor" aspect of OpenOffice is somewhat of a scam.  The interoperability that ODF shares across Novell Office, StarOffice, IBM WorkPlace, Red Office, and NeoOffice is entirely based on the fact that these iterations of OpenOffice are based on a single code base controlled 100% by Sun.  Which is exactly the case with MSOffice.  With this important exception - MOOXML (not Ecma 376) is interoperable across the entire Vista Stack!

    The Vista Stack is comprised of Exchange/SharePoint, MS Live, MS Dynamics, MS SQL Server, MS Internet Server, MS Groove, MS Collaboration Server, and MS Active Directory.   Behind these applications sits an important foundation of shared assets: MOOXML, Smart Documents, XAML and .NET 3.0.  All of which can be worked into third party, Stack-dependent applications through the Visual Studio .NET IDE.

    Here are some thoughts i wou

(WO/1995/013585) COMPOUND DOCUMENT FRAMEWORK

  • Summary of the Invention
    It is an object of the present invention to provide a document processing system in which object-oriented frameworks are utilized to implement particular document processing techniques, including an object-oriented compound document system. These and other objects of the present invention are realized by a document framework which supports at the system level a variety of compound document processing functions. The framework provides system level support of collaboration, linking, eternal undo, and content based retrieval. These and other objects are carried out by system level support of document changes, annotation through model and linking, anchors, model hierarchies, enhanced copy and pasting, command objects, and a generic retrieval framework.

Greg McNevin : Open Document Foundation Abandons Namesake, Closes up Shop

  • The decision to go with CDF has left some industry commentators scratching their heads, with arstechnica.com’s Ryan Paul noting that the decision is curious as CDF doesn't support “the full range of functionality required for office compatibility”. Paul does add, however, that the format's broad use of standards such as XHTML and SVG does give it a compelling edge.
  •  
    The W3C's Chris Lilley, IBM and the lawyer for OASIS have been making quite a bit of noise claiming that CDF doesn't support "the full range of functionality required for office compatibility". 

    This is a strange claim, especially when considering IBM as the primary source.  WICD Full 1.0 is the desktop profile for CDF.  Other profiles include WICD Mobile and WICD Core.  The call for implementations of WICD Core, Mobile and Full went out on Monday, November 12, 2007. 

    To understand CDF, one must first get a handle on the terms used to describe CDF technologies:
    ..... CDF = Compound Document Formats
    ..... CDRF = Compound Document by Reference Framework
    ..... WICD = Web Integration Compound Document
    ..... CDR using WICD = Compound Document by Reference using a WICD profile (Core, Full or Mobile)
    ..... Compound Document by Reference Framework 1.0
    ..... WICD Core 1.0
    ..... WICD Mobile 1.0 Profile
    ..... WICD Full 1.0 Profile

    The WICD Full 1.0 Profile is the "DESKTOP" profile for CDF.
    Some interesting quotes:

    "WICD Full 1.0 is targeted at desktop agents".

    "The WICD Full 1.0 profile is designed to enable rich multimedia content on desktop and high capability handheld agents."

    From the Compound Document by Reference Use Cases and Requirements Version 1.0:

    "The capability to view documents with preserved formatting, layout, images and graphics and interactive features such as zooming in and out and multi-page handling."

    "
    <

Does ODF 1.2 Metadata Solve the Interop Problem? - Microsoft starts rolling out more O...

  • Sorry Shish, you're wrong about ODF 1.2. Try ODF 1.5 or ODF 2.0, maybe. The metadata requirements for ODF 1.2 actually did include two-way lossless translation capability. Unfortunately these features did not survive the final cut, and were not included in the April 2007 submission. You might also want to check the February 23, 2007 metadata proposal from Florian Reuter. That also would have delivered the goods, and perhaps put ODF in that grand convergence category of usefulness across desktops, servers, devices and web systems currently the exclusive domain of MS-OOXML and CDF+. Florian had devised a means of using metadata to describe the presentation aspects of content and structural objects. Very revolutionary. And based on the simple notion that bold, font, margins etc. are simply metadata about content and style objects. Where the train came off the track had to do with the concept of an XML ID means of linking metadata to content. Not that there was anything wrong with this mechanism. It's actually quite clever. What went wrong was that Sun insisted that only those elements approved and supported by OpenOffice would be allowed to make use of XML ID metadata. For independent developers, this is a serious constraint. Because of this constraint, the metadata subcommittee started off with six elements supported by OOo that metadata could be applied to. IBM then came in and asked for eleven more elements having to do with charts and graphs. The OpenOffice crew decided they could support this, so in they went. Then an interesting question was posed, "How are independent developers supposed to submit elements for metadata consideration?"
  •  
    A second response to Mary Jo's "Microsoft starts rolling out more OOXML translators" is also posted here. The title is "Standardization by Corporation". Shish-Ka-Bob makes the assertion that the ODF 1.2 metadata model will enable lossless two-way conversion between MSOffice and ODF. While it's true that that intent was a key component of the original July 2006 Metadata Requirements, the proposal was eventually stripped from the final submission made in April of 2007. I try to explain to Shish how that came about. The second post here, "Standardization by Corporation", is a follow-on to statements made to Shish. The statements have to do with the events at ISO, and what I think will eventually happen. IMHO, ISO will follow either the AFNOR or British proposals to merge ODF and OOXML. To do this they will remove entirely the corporate vendor influence of Ecma and OASIS, and perfect the merger entirely at ISO. My post just happened to coincide with the "Standardization by Corporations" letter from ISO's Martin Bryan. A depressing but nevertheless very true concern. In fact, the OpenDocument Foundation was created specifically to address our concerns about the undue influence big application vendors were exerting on ODF following the April 30th, 2005 approval of ODF 1.0 (which went on to become ISO 26300). ~ge~

Wizard of ODF: The Foundation on Interop and the List Proposal Vote Deadline - 0 views

  • Oh, my. Both IBM and Sun voted for the proposal that broke the Foundation's plugin that was going to add full-fidelity native ODF file support to Microsoft Office. So it's sounding to me like at least two of the TC members who voted for the Sun/KOffice proposal didn't check in with the ECIS lawyer before they broke interoperability with Microsoft Office. Do you think Microsoft won't use this evidence in the DG Competition antitrust proceeding, Michael? Let's see, you guys are prosecuting Microsoft for not supporting ODF in Microsoft Office while you block Microsoft Office from supporting ODF. Yeah, I think DG Competition is going to hear about this one from Microsoft. They'll probably hear about what you said about compatibility being a trade off too. Oh, yeah. Microsoft's lawyers are going to love this. Look at the ECIS public statement about interoperability's importance.

Re: [office-comment] Public Comment - 0 views

  • Regarding section 1.5 itself: The Open Office TC decided to use the term MAY rather than MUST (or will) at the mentioned location, because it wanted to ensure that the OpenDocument specification can be used by as many implementations as possible. This means that the format should also be usable by applications that only support a very small subset of the specification, as long as the information that these applications store can be represented using the OpenDocument format. A requirement that all foreign elements and attributes must be preserved actually would mean that some applications may not use the format, although the format itself would be suitable. Therefore, we leave it up to the implementations which elements and attributes of the specification they support, and whether they preserve foreign elements and attributes. Some more information about this can be found in appendix D of the specification.
    • Gary Edwards
       
      This OASIS ODF discussion is about the compliance-conformance clause of the ODF specification: Section 1.5. A developer has complained that use of MAY instead of MUST in the wording of the clause would enable conforming applications to destroy foreign elements and alien attribute markup at will. This of course would result in ZERO Interoperability!!!!! The foreign elements and alien attributes were included for the purposes of improved ODF compatibility with the billions of MSOffice binary documents that would need to be converted to ODF. Sadly, the section 1.5 loophole falls short of the compatibility goal, but that only begins to scratch the surface of the ODF problems. OpenOffice only supports foreign elements and alien attributes for text spans and paragraphs!!!!!! All other such markup is unrecognized and therefore "destroyed" by OpenOffice. ZERO interop. No roundtripping with MSOffice desktops. Lossy conversion with jagged fidelity. Guaranteed.

An Antic Disposition: Asking the right questions about Office 2010's OOXML support

    • Alex Brown
       
      ... and we can expect similar censure for people claiming to support "ODF"?
  • Remember, the conformance language of OOXML is so loose that even a shell statement of "cat foo.docx > /dev/null" would qualify as a conformant application.
    • Alex Brown
       
      Think you're confusing ODF and OOXML here Rob; hint - look at OOXML "application descriptions"
  • But that is not what WG4 was recently told in Seattle, where they were told that Office would not write out Strict documents until Office 16
  • In other words, will Office 2010 be "strictly conformant" with the ISO/IEC 29500:2008 standards?
    • Alex Brown
       
      interesting made up concept, this "strictly conformant", for a standard which contains an extensibility mechanism ...
    • Alex Brown
       
      err, news to me ... and I was at the meeting.
  • To do otherwise is to essentially specify a requirement for the use of Microsoft Office and Microsoft Office alone.
    • Alex Brown
       
      or any of those other applications which support that format (including some from IBM even) ...


Dump the file server: Why we moved to the SharePoint Online cloud [review]

  • For this article, I wanted to focus on an important aspect of our move to Office 365, and that was our adoption of SharePoint Online as our sole document file server. I know, how passé for me to call it a file server, as it represents everything that fixes what plagues traditional file servers and NASes. Let's face it: file servers have been a necessary evil, not a nicety that enables collaboration and seamless access to data. They offer superior security and storage space, but this comes at the price of external access and coauthoring functionality. Corporate IT departments have had a band-aid known as VPN for some time now, but it falls short of being the panacea vendors like Cisco make it out to be. I know this well -- I support these kinds of VPNs day to day. Their licensing is convoluted, they're drowning in client application bug hell, and most of all, bound by the performance bottlenecks on either the client or server end.
  • I previously wrote about how my company used to juggle two distinct file storage systems. We had Google Drive as our web-based cloud document platform, but its penetration didn't go much further than its Google Docs functionality. That's because Google has a love-hate relationship with any Office file that's not a Google Doc. Sure, you can upload it and store it on the service, but the bells and whistles end there. Want to edit it with others? It MUST be converted to Google's format. And so we had to keep a crutch in place for everything else that had to stay in traditional Office formats, either due to customer requirements, complex formatting, or other reasons. That other device for us was a simple QNAP NAS box with 1.5TB of space.
  • We liked Google Drive's real time collaboration functionality, but the way it treated non-Docs files was pretty pitiful.
  • Dropbox for Business provides the best headroom for growth, but its starting monthly price is too much to swallow.
  • And Box and Egnyte don't bring much more to the table besides bona fide cloud storage and sync;
  • SharePoint Online offers a rich ecosystem that we can grow on.
  • For the purpose of running our day to day business needs, SharePoint Online has taken over for both Google Drive and our former NAS alike. We don't have to convert items to and from Google Docs anymore just to collaborate. We have as good, or better, permissions in SharePoint compared to Google Drive. And the search power in SharePoint is disgustingly accurate, providing the accuracy and file previews that we were used to on Google Drive.
  • SharePoint Online is first and foremost a cloud solution that has additional tie-ins with Office Online products, OneDrive, etc. that may or may not exist in the on-premise version of the product.
  • It's a cloud file server (the focus of this piece). It's a content search hub. It can run public websites and internal intranets. It can help handle complex document workflows. You can even run Access databases on it.
  • I can finally work as I wish, in-browser or in Office 2013 -- or both at once. My entire company "file server" is synced via OneDrive for Business to my Thinkpad, and likewise, I can edit any files in a browser via Office Online apps. It's a nirvana that Google Drive almost afforded us, were it not for Google's distaste for traditional Office files. It's good to know you can have your cake and eat it too.
  •  
    Yesterday Google announced dramatic price reductions for their Cloud Computing platform. This announcement was followed immediately by a similar announcement from Amazon. But what about Microsoft? The truth is that Microsoft doesn't need to reduce prices, and they are forcing both Google and Amazon reductions. My guess is that there are more reductions to come too. The answer is in this review of SharePoint Online and Office 365, where the author points out the fact that Google Drive / Apps totally mangles an MSOffice document. Once Google converts the documents, they are useless. "I previously wrote about how my company used to juggle two distinct file storage systems. We had Google Drive as our web-based cloud document platform, but its penetration didn't go much further than its Google Docs functionality. That's because Google has a love-hate relationship with any Office file that's not a Google Doc. Sure, you can upload it and store it on the service, but the bells and whistles end there. Want to edit it with others? It MUST be converted to Google's format. And so we had to keep a crutch in place for everything else that had to stay in traditional Office formats, either due to customer requirements, complex formatting, or other reasons. That other device for us was a simple QNAP NAS box with 1.5TB of space." In 2006-2007, when we were in the middle of the great ODF vs OOXML document wars, I had a conversation with Google's Open Source - Open Standards guru, Chris DiBona. It was during the Massachusetts crisis, and we were trying to garner Google corporate support for ODF. Chris listened to my pitch and summarized his position that conversion methods were very advanced, and going forward, file formats really didn't matter. He famously said, "Let a thousand formats bloom". I wonder if he still thinks that?

Why Microsoft Azure could have the last laugh in the cloud wars | CITEworld

  • Venture capitalist Brad Feld recently wrote an interesting post predicting the end of Amazon's dominance of the cloud computing market, and concluded, "it’s suddenly a good time to be Microsoft or Google in the cloud computing wars." I'd go one step farther. Using Feld's arguments, I'd say that Microsoft is in the driver's seat. First, the price war. Microsoft and Google are on approximately equal ground when it comes to cutting prices -- both have highly profitable core businesses that they can use to subsidize a price war in cloud infrastructure, even to the point of sustaining losses for a while to gain market share. Amazon does not.
  • Second, the quality argument. Like Feld, we've also pointed out that there are niche cloud providers that do a better job than the big guys at providing infrastructure-as-a-service for specific verticals, but when you move all the way up the stack to full software-as-a-service applications, Microsoft has an edge among the big three with Office 365.
  • Google has been making inroads into smaller businesses with Google Apps for almost a decade now, but Microsoft remains the standard among the biggest and most profitable business customers -- as this recent investigation from Dan Frommer at Quartz showed, only one company in the Fortune 50 uses Google Apps. (That company happens to be Google itself.)
  • The third argument, support, is mostly a wash. While Amazon's support may be terrible (I have no evidence of this, but I'm taking Feld's word for it), Microsoft and Google and their respective ecosystem partners do a decent job of supporting customers on their stacks.
  • But then comes the fourth argument. Feld points out that once companies get to $200,000 per month of cloud-infrastructure spend, it's actually significantly cheaper to build their own data centers
  • Microsoft is the only one of the big three players with an on-premise offering -- Windows Server and the rest of the Microsoft infrastructure family. Maybe the exact break-even point will change as the cloud price wars continue, but Microsoft has the most pieces customers would need to move from all-cloud to a hybrid or on-premise solution. Or, for that matter, for existing on-premise customers to begin experimenting with moving some workloads to the cloud.
  • There's one more point favoring Microsoft. Google's core business is selling online advertising. That business makes up about 90% of Google's revenue, and it has enviably high operating margins -- around 30%, based on Google's 2011 financial report. (I picked 2011 because that was before Google bought Motorola Mobility, which changed the margin structure.)
  • It's unclear how the Google Cloud Platform helps that business. Are customers using Google's cloud somehow more likely to advertise with Google? I don't see it. Are Google advertising customers demanding to run other workloads on Google technology? I don't see it.
  • Meanwhile, while Azure almost certainly offers lower margins than, say, on-premises Windows Servers, it's necessary -- customers are moving workloads to the cloud, and Microsoft needs a competitive offering there to keep them on the Microsoft stack so they continue to buy other Microsoft products. Plus, as I argued in point four, today's Azure customers could become tomorrow's on-premise Microsoft infrastructure customers.
  • In other words, Microsoft Azure and Google Compute Engine both lower the profit margins of their parent companies. But Azure is clearly strategic while Compute Engine, as far as I can tell, is not. Who's more likely to keep investing in and improving its cloud?
  • Right now, Microsoft's chances look pretty good to me. No wonder they put the cloud guy in charge of the company.

Microsoft pushes Trade Secrets Bill

  • A spokesman for the Microsoft On The Issues website has expressed the company’s support for new legislation that would reform the legal framework for companies wishing to protect their trade secrets in a cloud-centric world where such information is frequently forced to reside on networks. In the post Microsoft’s Assistant General Counsel of IP Policy & Strategy Jule Sigall rallies behind business and academic concerns supporting the proposed Defend Trade Secrets Act 2015 (DTSA), which goes before the United States Senate Judiciary Committee today. Sigall, who is also Associate General Counsel for Copyright in Microsoft’s Legal & Corporate Affairs department, makes an ardent case for reform of the current legislation, as furnished by the Uniform Trade Secrets Act (UTSA). UTSA’s provisions are argued to be fractured, and rendered ineffective both by the inability of plaintiffs to pursue suits in federal courts (despite trade secret infractions being federal by nature), and by the fact that not all states have adopted or instituted all the measures provided by the legislation. Additionally, the limited provision for redress in international cases of trade secret theft is to be addressed.
  • Sigall presents the case of Microsoft’s Cortana AI as an example of why new legislation is necessary: ‘[Behind] Cortana sits a vast amount of technology developed or enhanced in-house by Microsoft – voice recognition; language translation; reactive and predictive algorithms that can synthesize context, location and data, and interface with the vast resources of the Bing search engine index; and a complex array of cloud servers to crunch and serve data in real time. This technology represents tens of thousands of hours of research, trial and error, and continued improvement as Cortana is adapted for new devices and new scenarios’
  • Sigall argues that better protection procedures for trade secrets, the only form of IP which currently lacks comprehensive cover in law, are essential for start-ups whose ideas, business plans and even customer lists may constitute the only marketable value of a company that is just in the stage of consolidating. ‘A trade secret is unique among forms of intellectual property in how it is legally protected. While it is a federal crime to steal a trade secret, a business that has its trade secrets stolen must rely on state law to pursue a civil remedy. Owners of copyrights, patents, and trademarks can go to federal court to protect their property and seek damages when their property has been infringed, but trade secret owners do not have access to such a federal remedy.’
  • Defend Trade Secrets Act 2015 contains [PDF] significant material from its doomed predecessor of 12 months ago, and one of its boldest initiatives is the extension of ex parte seizures, instituted in UTSA in a more limited form (particularly in the 1985 amendment to the Uniform Law Commission’s 1979 initial legislation). An ex parte seizure provides a kind of restraining order or injunction on disputed information, or even the dissemination of knowledge about whether the information is disputed, and places it under federal protection on the plaintiff’s behalf.
  • Microsoft had a hard time adjusting to the open source revolution, particularly in regard to the PC/Mac Office product which at one time represented the most successful and ubiquitous software in the world, and the many legal and semantic wrangles over the closed-source nature of Office formats such as Word led ultimately to a hybridised open source .docx format which is still argued to not be the OpenXML that was promised.
  • According to Sigall the state-by-state system currently in place was ‘simply not built with the digital world in mind’, and calls for ‘A uniform, national standard for protection’ which does not stop at state lines or even national borders.
  • In practical terms this seems likely to extend the circumstances under which information about leaks, hacks or thefts of information can be made the subject of gag orders for legal reasons, since it brings trade secrets into the same legal framework as other forms of intellectual property which enjoy more comprehensive coverage and recourse in law. The bill would also extend the purview of the 1996 Economic Espionage Act to take in a more rigorously conceived concept of ‘trade secrets’.
  • Even with the issues clear, the risk of disproportionate or over-reaching response in the event of the new bill passing successfully through Congress in 2016 (it is unlikely to pass this year) is clear enough that the lack of network discussion about it is quite surprising. Essentially DTSA represents the same kind of proposed ‘judicial fast track’ – though in favour of corporations instead of governments – that has outraged so many commenters in the wake of the November 13th Paris attacks.
  • Silence in court: Amongst its more quotidian clauses, the Defend Trade Secrets Act 2015 effectively offers corporate plaintiffs increased opportunity to federalise disputed private material in cases involving trade secrets, with all the penalties for infraction associated with that change of status – and far greater scope for sub judice orders likely to contain and conceal future breaches of information.
  • Eric Goldman of the Santa Clara University School of Law has just published a paper outlining the risks of extending ex parte seizures in the manner that DTSA 2015 proposes. Goldman writes that ‘the Seizure Provision does not solve many, if any, problems. In light of the remedies already available to trade secret owners in ex parte temporary restraining orders (TROs), the Seizure Provision purports to apply to only a narrow set of additional circumstances. In exchange for that modest benefit, the Seizure Provision creates the risk of anti-competitive seizures and seizures that cause substantial collateral damage to innocent third parties. To discourage such abuses, the Act imposes procedural safeguards and creates a cause of action for wrongful seizures. Unfortunately, those safeguards are miscalibrated to achieve the desired protections against abusive seizures.’
  •  
    Lots of possible Constitutional issues lurking. The Constitution creates only two types of intellectual property, patents and copyrights. "(P)roperty interests . . . are not created by the Constitution. Rather, they are created and their dimensions are defined by existing rules or understandings that stem from an independent source such as state law." Ruckelshaus v. Monsanto Co., 467 US 986 (1984), https://goo.gl/ZljO1H (trade secrets case). The traditional source of rights in trade secrets has been state law. Thus there is a states' rights issue lurking in this legislation, a question whether the federal government is invading the States' police power, an "our federalism" question.

The Document Foundation, LibreOffice and OOXML - The Document Foundation Wiki

  • Why does LibreOffice offer to read, edit and save documents in OOXML? Just like OpenOffice.org, LibreOffice lets its users handle documents in the format used by Microsoft Office 2007 and 2010. It is important to understand that these formats, also called OOXML, are in fact somewhat different from the ISO standard bearing the same name; in fact it is unclear whether anyone is able to implement the ISO standard. To avoid confusion, we will refer to the Microsoft formats produced by Microsoft Office as Microsoft Open XML (MOX) hereafter. To enable data interchange, LibreOffice, and OpenOffice.org before it, has traditionally engaged with the reality of a world filled with data in many, less than ideal formats. Our users are used to exchanging data bi-directionally between many proprietary formats and their Free Software equivalents. Indeed, a non-dominant player that deliberately shuns inter-operating has few ways to remain relevant.
  • Don't you feel as if you are betraying Free and Open Source Software, as well as Open Standards such as ODF? No. And if we felt that way, we would take immediate action to remove the full stack. What we are offering our users is convenience; if we didn't offer these features we would not be serving users and we would get daily messages requesting the support of the new Microsoft Office formats. Besides, the same reasoning applies to the old Microsoft Office formats we support; and while it was thought for a while it was possible to prevent people from using these formats or even buying Microsoft Office, it turned out that it was not possible. We do believe, however, that by offering a full-featured and innovative office suite that exists among a rich and diverse ODF ecosystem, ODF shall prevail in the end.

ODF vs. OOXML: War of the Words | Andrew Updegrove: Tales of Adversego

  •  
    "For some time I've been considering writing a book about what has become a standards war of truly epic proportions.  I refer, of course, to the ongoing, ever expanding, still escalating conflict between ODF and OOXML, a battle that is playing out across five continents and in both the halls of government and the marketplace alike.  And, needless to say, at countless blogs and news sites all the Web over as well. Arrayed on one side or the other, either in the forefront of battle or behind the scenes, are most of the major IT vendors of our time.  And at the center of the conflict is Microsoft, the most successful software vendor of all time, faced with the first significant challenge ever to one of its core businesses and profit centers - its flagship Office productivity suite. The story has other notable features as well:  ODF is the first IT standard to be taken up as a popular cause, and also represents the first "cross over" standards issue that has attracted the broad support of the open source community.  Then there are the societal dimensions: open formats are needed to safeguard our culture and our history from oblivion.  And when implemented in open source software and deployed on Linux-based systems (not to mention One Laptop Per Child computers), the benefits and opportunities of IT become more available to those throughout the third world. There is little question, I think, that regardless of where and how this saga ends, it will be studied in business schools and by economists for decades to come.  What they will conclude will depend in part upon the materials we leave behind for them to examine.  That's one of the reasons I'm launching this effort now, as a publicly posted eBook in progress, rather than waiting until some indefinite point in the future when the memories of the players in this drama have become colored by the passage of time and the influence of later events. My hope is that those of you who have played or are n

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
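    To make the wrapper concrete: the zip archive opens with a standard META-INF/container.xml that points the reading system at the package file, which in turn lists the XHTML chapters. A minimal container.xml (the OEBPS path is a common convention, not a requirement) looks like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <container version="1.0"
          xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <!-- Points at the OPF package file, which lists the XHTML content -->
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>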
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
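    For reference, unpacking an .idml archive typically reveals a layout along these lines (drawn from the IDML packaging conventions; exact contents vary by document):

      mimetype                   identifies the package type
      designmap.xml              top-level map tying the parts together
      MasterSpreads/             master page geometry
      Spreads/                   layout spreads
      Stories/                   the text content, one XML file per story
      Resources/                 styles, fonts, colours, graphics
      META-INF/                  package metadata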
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
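    A hypothetical before-and-after makes that cleanup concrete (the class name is invented for illustration). The export would represent a bulleted item as a styled paragraph:

      <p class="bullet-item">Check the manuscript against house style.</p>

    which search-and-replace rewrites as structural list markup:

      <ul>
        <li>Check the manuscript against house style.</li>
      </ul>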
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
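    The script itself is not reproduced in this excerpt, but a toy sketch conveys the general shape such an XHTML-to-ICML transformation might take (the ICML style names are illustrative, and a real ICML file needs more wrapper structure than shown here):

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <xsl:output method="xml" indent="yes"/>

        <!-- Wrap the whole document in a single InCopy story -->
        <xsl:template match="/">
          <Story Self="story_main">
            <xsl:apply-templates select="//xhtml:p"/>
          </Story>
        </xsl:template>

        <!-- Map each XHTML paragraph to an ICML paragraph range -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="normalize-space(.)"/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run through an XSLT processor (for instance, xsltproc html-to-icml.xsl chapter.xhtml > chapter.icml, with hypothetical file names), the output can then be "placed" into the InDesign template as described above.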
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms that are looking at the straightforward benefits of XML-based processes—single-source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
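    A trivial sketch of what that could look like (the class vocabulary here is invented for this example, not an existing microformat):

        <!-- Plain XHTML, extended with publisher-specific semantics
             carried on the standard class attribute -->
        <p class="dedication">For my parents.</p>
        <blockquote class="epigraph">
          <p>Happy families are all alike...</p>
          <p class="epigraph-source">Leo Tolstoy,
            <span class="book-title">Anna Karenina</span></p>
        </blockquote>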
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is an XML vocabulary native to the browser, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an SGML application.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000, and that application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process, but I think that is something much easier to handle than trying to…

    As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".  See the hypothetical sketch below for what such a pivot could look like.
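    Purely as a hypothetical sketch (the NCP format isn't shown here, so every element and class name below is invented): an outline node in an imagined XML encoding of the NCP format, and the XHTML pivot it could map to, with NCP semantics riding on the standard class attribute:

        <!-- Hypothetical XML encoding of an NCP outline node (invented names) -->
        <node id="n42" title="Chapter 1">
          <text>Call me Ishmael.</text>
          <node id="n43" title="Loomings">
            <text>Some years ago...</text>
          </node>
        </node>

        <!-- The XHTML it might pivot to; nesting and titles are preserved,
             and the class attributes keep the outline semantics recoverable -->
        <div class="ncp-node" id="n42">
          <h2 class="ncp-title">Chapter 1</h2>
          <p class="ncp-text">Call me Ishmael.</p>
          <div class="ncp-node" id="n43">
            <h3 class="ncp-title">Loomings</h3>
            <p class="ncp-text">Some years ago...</p>
          </div>
        </div>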

The End of ODF & OpenXML - Hello ODEF! - 0 views

  •  
    Short slide deck of Barbara Held's February 28th, 2007 EU IDABC presentation. She introduces ODEF, the "Open Document Exchange Format", which is designed to replace both ODF and OpenXML. ComputerWorld recently ran a story about the end of ODF as they covered the failure of six "legislative" initiatives designed to mandate ODF as the official file format. While the political treachery surrounding these initiatives is a story in and of itself, the larger story, the one that has worldwide reverberations, wasn't mentioned. The larger ODF story is that ODF vendors are losing the political battles because they are unable to provide government CIOs with real-world solutions. Here are three quotes from the California discussion that really say it all: "Interoperability isn't just a feature. It's the basic requirement for getting your XML file format and applications considered" ... "The challenge is that of migrating our existing documents and business processes to XML. The question is which XML? OpenDocument or OpenXML?" ... "Under those conditions, is it even possible to implement OpenDocument?" - Bill Welty, CIO of the California Air Resources Board, wondering whether there was a way to support California legislative proposal AB-1668. This is hardly the first time the compatibility-interoperability issue has challenged ODF. Massachusetts spent a full year on a pilot study testing the top tier of ODF solutions: OpenOffice, StarOffice, Novell Office and IBM's WorkPlace (prototype). The results were a disaster for ODF. So much so that the 300-page pilot study report and accompanying comments wiki have never seen the light of day. In response to the disastrous pilot study, Massachusetts issued their now-infamous RFi, a "request for information" about whether it's possible or not to write an ODF plugin for MSOffice applications. The OpenDocument Foundation responded to the RFi with our da Vinci plugin. The quick descriptio…

ODF-Converter & the ODF Zero Interop problem - 0 views

  • The ODF-Converter translates OpenXML documents (.DOCX) to OpenDocument Format (.ODT), and conversely, for OpenXML processing applications. You will find below the list of unsupported features, which may be due to standard compatibility issues or to the translator itself (see rendering issues as discussed in the blog)...
  •  
    Explosive compatibility-interoperability study concerning ODF and MOOXML!  This has Florian's signature written all over it, and it goes right to the heart of the matter.

    David A. Wheeler submitted a comment to the OASIS ODF TC outlining his concerns with this publication.  He suggests that a few minor changes to ODF could greatly improve compatibility-interop issues.  He also figures out that OpenOffice - ODF has more features than MSOffice - MOOXML.  What he doesn't get is that it is these new and innovative features that continue to increase the difficulty of implementing ODF in real-world business process workgroups!

    David also ignores the fact that the TC just voted down the Novell "List Enhancement Proposal", which was specifically designed to address the compatibility-interop issues outlined in this odf-converter blog!  Given a choice, the ODF TC members chose the new and innovative features of the interop-breaking Sun-KOffice "List Enhancement Proposal" instead.

    The List Enhancement Proposal discussion was so contentious, and so focused on personal destruction, as to represent a total breakdown of the ODF consensus process.  There is no way that either the Foundation or Novell will ever contribute another compatibility-interop enhancement proposal, given the personal assault and the determined opposition of Sun to compatibility-interoperability initiatives.

    The hard lesson the Foundation learned is that if you oppose Sun, you'll get booted out of OASIS!

    The lesson Novell learned is that they are better off working through Ecma 376 to resolve these issues that the public demands be addressed.

    Notice the last line in David's comment, "In any case, the MUCH, MUCH longer list of problems with Microsoft XML format isn't our problem." 

    During the contentious List Enhancement Proposal and the compatibility-interop-related metadata RDF/XML discussions, ODF members freque…
  •  
    These are the same guys who just voted against the Novell List Enhancement Proposal that did exactly what the odf-converter blog claims needs to be done if the compatibility-interop problems are to be resolved!

The Age of OOXML Computing - thanks a pant load Sun! - 0 views

  • Why does Microsoft want another standard? What's the rationale? There are at least 4 good reasons why:
    * ODF started out and was completed as an XML format specifically supporting OpenOffice, with a tight scope around that product.
    * It wasn't until 2005 that the spec was offered up as a general XML office document format and consequently renamed to ODF.
    * No opportunity existed for Microsoft to actually participate in this full process, given the original scope, the 6 months between the re-naming of the spec to ODF, and its subsequent approval by OASIS as a standard.
    * The scope of the ODF spec never included even the basic requirements that Microsoft required to support a fully open format, nor did the OASIS technical committee want to include these requirements.
  •  
    Erwin's StarOffice Tango has an exhaustive response to this Microsoft Q&A, correcting false statements by Microsoft.

MSFT: Let's Do VHS Versus Betamax All Over - 0 views

  • “You want the customers to vote with their wallets,” Hilf said. “The most healthy market environment is where there is competition.”
  •  
    Another great commentary from Walt Hucks.  This time his target is the over-the-top, self-serving statements from Microsoft about consumers wanting "competition" between standards.  I was a loser in the Betamax wars.  At least SONY had an edge in those wars, claiming a somewhat, though negligible, advantage in video fidelity.  Microsoft OOXML has no such advantage over ODF.  None whatsoever.

    Walt once again exposes Microsoft as the company you can count on to treat customers as mindless idiots.  Given the choice, customers would choose ODF over OOXML hands down, time after time, every time.  But they are not given "the choice".  Microsoft spends a lot of spin cycles complaining and whining about IBM and others not providing native support for OOXML.  Incredibly, they do this while announcing loudly that they have no intention of ever supporting ODF natively in their own applications.  What's up with that?  Leveraging the monopoly.  That's what.


We've Been Had! - 0 views

  • There is nothing open about MOOXML, and it should never have made it to consideration as an international standard. But one has to ask: what is up with Sun? The Jon Bosak comment is just as much cause for concern as the fact that the nations of the world would dare consider OOXML as an international standard. All I can say is that we've been had. Sun and Microsoft have worked us royally, and only now, at the last moment, does the fog of confusion clear so we can see it all.
  •  
    Yeah.  I said this!  And I still think ODF has what it takes to become a universal file format.  But only if the "interoperability enhancement" proposals are made part of the specification.  You can't talk your way to universal interop.  It has to go into the spec!

    OBTW, for you idiots who think I support OOXML as a standard?  You're idiots.  I support the quest for a universal file format that is totally application-, platform- and vendor-independent.  The requirements, demands and criticisms we make of OOXML should be applied to every file format up for universal file format consideration.  Including ODF.  Including XHTML+ (XHTML, CSS3, RDF).  Including the EU IDABC "ODEF".

    The one area where I differ from most universal-interoperability seekers is that I fully believe the big vendors have left open a loophole we can exploit.  The plugin architecture is fully able to convert a big vendor's application to produce our beloved but elusive universal file format.

    This is important because the big vendors control "interoperability" by controlling the big-vendor standards consortia and the major applications.  It's a double-edged sword.

    The ubiquitous plugin architecture enables universal interop seekers to exploit the applications any way we want.  What's missing is a truly open "universal" standards process that is outside the reach of big vendors. 

    Personally, I like the recent GPL3 process as a model on which to base emerging universal standards work.  Somehow the big vendors must be neutralized.  Otherwise, we'll never see the universal interop the world so desires.

    idiots,
    ~ge~
