Future of the Web: Group items tagged HTML, open-web


Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.

    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]

    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.

    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.

    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML-based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things.

    Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.

    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
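    To make this concrete, here is a minimal sketch of class-based extensibility in the spirit of the microformats mentioned above; the class names (chapter, epigraph, summary) are illustrative choices, not part of any standard:

```xml
<!-- Plain XHTML: structurally, this is just a div and paragraphs. -->
<!-- The class attributes layer publisher-specific semantics on top; -->
<!-- any browser or tool that doesn't know them simply ignores them. -->
<div class="chapter">
  <p class="epigraph">Everything should be made as simple as possible.</p>
  <p class="summary">This chapter surveys XML workflow options.</p>
  <p>Ordinary body text begins here, tagged as a plain paragraph.</p>
</div>
```

    A generic tool still sees well-formed XHTML; a publisher's own stylesheets and transforms can key off the class values to recover the semantics.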
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
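    The anatomy is easy to verify by unzipping any ePub. At a minimum, the wrapper holds a mimetype file and a META-INF/container.xml pointing at the package document; the OEBPS/ path below is a common convention rather than a requirement:

```xml
<!-- META-INF/container.xml: tells a reading system where the package
     document lives; the package in turn lists the XHTML content files. -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```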
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines:
    - Prefer existing and/or ubiquitous tools over specialized ones wherever possible.
    - Prefer free software over proprietary systems where possible.
    - Prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems.
    - Play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
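    As a rough sketch of what unpacking an IDML file reveals (the file names here are illustrative, and the exact layout should be checked against Adobe's IDML documentation), a top-level designmap.xml stitches the package together by reference, alongside directories such as MasterSpreads/, Spreads/, Stories/, and Resources/:

```xml
<!-- designmap.xml (abridged sketch): the map tying the package together. -->
<Document xmlns:idPkg="http://ns.adobe.com/AdobeInDesign/idml/1.0/packaging">
  <idPkg:Spread src="Spreads/Spread_ub6.xml"/>
  <idPkg:Story src="Stories/Story_u123.xml"/>
</Document>
```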
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document, one that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
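    As an illustration of the kind of cleanup involved (the class name here is typical of a presentation-oriented export, though the exact strings vary):

```xml
<!-- Before: a list faked with classed paragraphs, as exported -->
<p class="bullet-item">First point</p>
<p class="bullet-item">Second point</p>

<!-- After: structural XHTML that any tool recognizes as a list -->
<ul>
  <li>First point</li>
  <li>Second point</li>
</ul>
```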
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall family of XML specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
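    To give a flavour of the approach, here is a heavily abridged sketch of such a transformation; this is not the script itself, and the ICML element and style names (ParagraphStyleRange, CharacterStyleRange, Content) are simplified from Adobe's pattern, so they should be checked against the IDML/ICML documentation before relying on them:

```xml
<!-- xhtml2icml.xsl (sketch). Run with, e.g.:
       xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xhtml="http://www.w3.org/1999/xhtml">

  <!-- Wrap the whole document in a single ICML story. -->
  <xsl:template match="/">
    <Story Self="story_main">
      <xsl:apply-templates select="//xhtml:p"/>
    </Story>
  </xsl:template>

  <!-- Each XHTML paragraph becomes a styled paragraph range; the
       AppliedParagraphStyle maps to a style defined in the template. -->
  <xsl:template match="xhtml:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
      </CharacterStyleRange>
      <Br/>
    </ParagraphStyleRange>
  </xsl:template>

</xsl:stylesheet>
```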
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
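    For the curious, the heart of that wrapper is the package document the tool generates: a metadata block, a manifest of every file, and a spine giving reading order. A stripped-down sketch in the OPF 2.0 style of the period (file names and the identifier are placeholders):

```xml
<!-- OEBPS/content.opf (abridged): metadata, manifest, reading order -->
<package version="2.0" xmlns="http://www.idpf.org/2007/opf"
         unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:PLACEHOLDER</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
    <item id="css" href="book.css" media-type="text/css"/>
    <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
  </manifest>
  <spine toc="ncx">
    <itemref idref="ch1"/>
  </spine>
</package>
```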
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is the browser-native flavour of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also based on SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gary Edwards

ptsefton » OpenOffice.org is bad for the planet

  •  
    ptsefton continues his rant that OpenOffice does not support the Open Web. He's been on this rant for so long, I'm wondering if he really thinks there's a chance the lords of ODF and the OpenOffice source code are listening? In this post he describes how useless it is to submit his findings and frustrations with OOo in a bug report. Pretty funny stuff, even if you do end up joining the Michael Meeks trek along this trail of tears. Maybe there's another way?

    What would happen if pt moved from targeting the not-so-open OpenOffice to targeting governments and enterprises trying to set future information system requirements?

    NY State is next up on this endless list. Most likely they will follow the lessons of exhaustive pilot studies conducted by Massachusetts, California, Belgium, Denmark and England, and end up mandating the use of both open standard "XML" formats, ODF and OOXML.

    The pilots concluded that there was a need for both XML formats, depending on the needs of different departments and workgroups. The pilot studies scream out a general rule of thumb: if your department has day-to-day business processes bound to MSOffice workgroups, then it makes sense to use MSOffice OOXML going forward. If there is no legacy MSOffice-bound workgroup or workflow, it makes sense to move to OpenOffice ODF.

    One thing the pilots make clear is that it is prohibitively costly and disruptive to try to replace MSOffice bound workgroups.

    What NY State might consider is that the Web is going to be an important part of their information systems future. What a surprise. Every pilot recognized, and indeed emphasized, this fact. Yet they fell short of the obvious conclusion: mandating that desktop applications provide native support for Open Web formats, protocols and interfaces!

    What's wrong with insisting that desktop applications and office suites support the rapidly advancing HTML+ technologies as well as the applicat
Gary Edwards

The Open Web: Next-Generation Standards Support in WebKit/Safari

  •  
    Apple has posted an interesting page describing Safari technologies. Innovations and support for existing standards as well as the ACID3 test are covered.

    Many people think that the Apple WebKit-Safari-iPhone innovations are pushing Open Web Standards beyond the limits of "Open", and deep into the verboten realm of vendor-specific extensions. Others, myself included, believe that the WebKit community has to do this if Open Web technologies are to be in any way competitive with Microsoft's RiA (XAML-Silverlight-WPF).

    Adobe RiA (AiR-Flex-Flash) is also an alternative to WebKit and Microsoft RiA; kind of half Open Web, half proprietary though. Adobe Flash is of course proprietary, while Adobe AiR implements the WebKit layout engine and visual document model. I suspect that as Adobe RiA loses ground to Microsoft Silverlight, they will open up Flash. But that's not something the Open Web can afford to wait for.

    In many ways, WebKit is at the cutting edge of Ajax Open Web technologies. The problems of Ajax not scaling well are being solved as shared JavaScript libraries continue to amaze, and the JavaScript engines roar with horsepower. Innovations in WebKit, even the vendor-device specific ones, are being picked up by the JS Libraries, Firefox, and the other Open Web browsers.

    At the end of the day though, it is the balance between the ACiD3 test on one side, and the incredible market surge of WebKit smartphones, countertops, and netbook devices at the edge of the Web on the other, that seems to hold things together.

    The surge at the edge is washing back over the greater Web, as cross-browser frustrated Web designers and developers roll out the iPhone welcome. Let's hope the ACiD3 test holds. So far it's proving to be a far more important consideration for maintaining Open Web interop, without sacrificing innovation, than anything going on at the stalled W3C.

    "..... Safari continues to lead the way, implementing
Gary Edwards

Microsoft's Quest for Interoperability and Open Standards

  •  
    Interesting article discussing the many ways Microsoft is trying to improve the public perception that they are serious about interoperability and open formats, protocols and interfaces. Rocketman attended the recent ISO SC34 meeting in Prague and agrees that Microsoft has indeed put on a new public face filled with cooperation, compliance and unheard-of sincerity.

    He also says, "Don't be fooled!!!"

    There is a big difference between participation in vendor consortia and government sponsored public standards efforts, and actual implementation at the product level. Looking at how Microsoft products implement open standards, my take is that they have decided on a policy of end-user choice. Their applications offer, on the one hand, the choice of aging, near-irrelevant and often crippled open standards. And on the other, the option of very rich and feature-filled but proprietary formats, protocols and interfaces that integrate across the entire Microsoft platform of desktop, devices and servers. For instance: IE8 supports 1998 HTML-CSS, but not the advanced ACiD-3 "HTML+" used by WebKit, Firefox, Opera and near every device or smartphone operating at the edge of the Web. (HTML+ = HTML5, CSS4, SVG/Canvas, JS, JS Libs).

    But they do offer an advanced .NET-WPF proprietary alternative to Open Web HTML+: XAML, Silverlight, XPS, LINQ, Smart Tags, and OOXML. Very nice.

    "When an open source advocate, open standards advocate, or, well, pretty much anyone that competes with Microsoft (news, site) sees an extended hand from the software giant toward better interoperability, they tend to look and see if the other hand's holding a spiked club.

    Even so, the Redmond, WA company continues to push the message that it has seen the light regarding open standards and interoperability...."

Gary Edwards

Why Kindle Should Be An Open Book - Tim O'Reilly at Forbes.com

  •  
    Like someone finding out that the rapture has happened, and they've been left behind, Tim O'Reilly shakes his fists and shouts to the heavens that Amazon must support open standards. He argues that in spite of incredible market success, the Amazon Kindle will fail because the document format is not Open. He even argues that Apple, with the iPod and iPhone, has figured out how to blend Open Web formats and application development with proprietary hardware initiatives....

    "The Amazon Kindle has sparked huge media interest in e-books and has seemingly jump-started the market. Its instant wireless access to hundreds of thousands of e-books and seamless one-click purchasing process would seem to give it an enormous edge over other dedicated e-book platforms. Yet I have a bold prediction: Unless Amazon embraces open e-book standards like epub, which allow readers to read books on a variety of devices, the Kindle will be gone within two or three years."

    O'Reilly points to ePub as an open format, apparently not realizing the format falls far short of Open Web advances designed to enable a complete publication-typesetter model. The WebKit and Mozilla open source communities are pushing the envelope of Open Web development with an extremely advanced document model based on HTML5, CSS3, SVG/Canvas, and JavaScript4+. ePub on the other hand is stuck in 1998, supporting the aging HTML4 - CSS2.1 specs. Very sad.

Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea...

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications.

    In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership, and the first three projects being developed by members and other contributors are all nearing "1.0" status.

    The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms (Android, iOS, OS X, and Windows), and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Gary Edwards

Good News for Ajax and the Open Web - The Browser Wars Are Back

  • For much of this decade, Web browsing has been dominated by Microsoft's Internet Explorer (IE), which at its height achieved market share numbers approaching 95%, with the result that Microsoft owned a de facto standard for the Web and held effective veto power over the future of HTML. During much of this period, Microsoft suspended development of IE, with the result that virtually no new features appeared within the world's dominant browser from 2001 to 2006. But while IE was sleeping, one of the biggest phenomena of the computer age happened: Ajax. Clever Web developers discovered gold in them there mountains. Using Ajax techniques, Web developers could create desktop-like rich user interfaces right in the browser. Not only that, Ajax was evolutionary. Ajax offered an incremental path from the industry's existing HTML-based infrastructure and know-how, allowing Web developers to add rich Ajax elements to an existing HTML page.
  • A companion community effort helping to accelerate the adoption of open standards is the Web Standards Project (http://www.webstandards.org), which is producing a set of "acid tests" that verify browser support for Open Web technologies, such as HTML, CSS and JavaScript. Acid2 is focused mainly on CSS support, and is now supported by Opera, Safari/WebKit, and IE. Acid3 (http://www.webstandards.org/action/acid3) tests DOM scripting, CSS rendering,
    • Gary Edwards
       
      The amazing thing about Ajax and the Open Web is the way WHATWG, WebKit, and the Web Standards "ACID" work has accelerated Open Web Standards, pushing far beyond the work of the glacial W3C.
  • Runtime Advocacy Task Force
  •  
    Lengthy article from the OpenAjax Alliance summarizing HTML, Ajax and the future of the Open Web. Very well referenced, with lots of whitepapers and links.
  •  
    Good summarization of the Open Web future.
Paul Merrell

Microsoft starts distributing open-source Drupal | The Open Road - The Business and Pol...

  • The single biggest distributor of Drupal just might be Microsoft. As I discovered from Dries Buytaert's blog on Wednesday, Microsoft's Web Application Installer comes with out-of-the-box support for Drupal, OScommerce, and other popular open-source Web applications. The Web Application Installer Beta is designed to help get you up and running with the most widely used Web applications freely available for your Windows Server. Web AI provides support for popular ASP.net and PHP Web applications, including Graffiti, DotNetNuke, WordPress, Drupal, OSCommerce, and more. With just a few simple clicks, Web AI will check your machine for the necessary prerequisites, download these applications from their source location in the community, walk you through basic configuration items, and then install them on your computer.
  •  
    Microsoft attempts to co-opt the FOSS web app scene with a new installer. Will this Microsoft action cause the FOSS community to make it easier to install web apps on Linux? At present, some Linux distribution repositories include installer packages for a very few, very popular web applications such as Mediawiki. Many web apps require expertise with the LAMP stack to install and resolve often complex dependencies and configuration details, perhaps most importantly security details. Documentation tends to be very poor for FOSS web apps, assuming knowledge most software users lack. Will this Microsoft move trigger a web app installer war with the FOSS community? Stay tuned.
Paul Merrell

Can Dweb Save The Internet? 06/03/2019

  • On a mysterious farm just above the Pacific Ocean, the group who built the internet is inviting a small number of friends to a semi-secret gathering. They describe it as a camp "where diverse people can freely exchange ideas about the technologies, laws, markets, and agreements we need to move forward.” Forward indeed. It wasn’t that long ago that the internet was an open network of computers, blogs, sites, and posts. But then something happened -- and the open web was taken over by private, for-profit, closed networks. Facebook isn’t the web. YouTube isn’t the web. Google isn’t the web. They’re for-profit businesses that are looking to sell audiences to advertisers. Brewster Kahle is one of the early web innovators who built the Internet Archive as a public storehouse to protect the web’s history. Along with web luminaries such as Sir Tim Berners-Lee and Vint Cerf, he is working to protect and rebuild the open nature of the web. “We demonstrated that the web had failed instead of served humanity, as it was supposed to have done,” Berners-Lee told Vanity Fair. The web has “ended up producing -- [through] no deliberate action of the people who designed the platform -- a large-scale emergent phenomenon which is anti-human.”
  • So, they’re out to fix it, working on what they call the Dweb. The “d” in Dweb stands for distributed. In distributed systems, no one entity has control over the participation of any other entity. Berners-Lee is building a platform called Solid, designed to give people control over their own data. Other global projects also have the goal of taking back the public web. Mastodon is decentralized Twitter. Peertube is a decentralized alternative to YouTube. This July 18 - 21, web activists plan to convene at the Decentralized Web Summit in San Francisco. Back in 2016, Kahle convened an early group of builders, archivists, policymakers, and journalists. He issued a challenge to use decentralized technologies to “Lock the Web Open.” It’s hard to imagine he knew then how quickly the web would become a closed network. Last year’s Dweb gathering convened more than 900 developers, activists, artists, researchers, lawyers, and students. Kahle opened the gathering by reminding attendees that the web used to be a place where everyone could play. “Today, I no longer feel like a player, I feel like I’m being played. Let’s build a decentralized web, let’s build a system we can depend on, a system that doesn’t feel creepy,” he said, according to IEEE Spectrum. With the rising tide of concerns about how social networks have hacked our democracy, Kahle and his Dweb community will gather with increasing urgency around their mission. The internet began with an idealist mission to connect people and information for good. Today’s web has yet to achieve that goal, but just maybe Dweb will build an internet more robust and open than the current infrastructure allows. That’s a mission worth fighting for.
Gary Edwards

Siding with HTML over XHTML, My Decision to Switch - Monday By Noon

  • Publishing content on the Web is in no way limited to professional developers or designers; much of the reason the net is so active is that anyone can make a website. Sure, we (as knowledgeable professionals or hobbyists) all hope to make the Web a better place by doing our part in publishing documents with semantically rich, valid markup, but the reality is that those documents are rare. It’s important to keep in mind the true nature of the Internet: an open platform for information sharing.
  • XHTML2 has some very good ideas that I hope can become part of the web. However, it’s unrealistic to think that all web authors will switch to an XML-based syntax which demands that browsers stop processing the document on the first error. XML’s draconian policy was an attempt to clean up the web. This was done around 1996 when lots of invalid content entered the web. CSS took a different approach: instead of demanding that content isn’t processed, we defined rules for how to handle the undefined. It’s called “forward-compatible parsing” and means we can add new constructs without breaking the old. So, I don’t think XHTML is a realistic option for the masses. HTML 5 is it.
    • Gary Edwards
       
      Great quote from CSS expert Hakon Wium Lie.
  • @marbux: Of course I disagree with your interop assessment, but I wondered how it is that you’re missing the point. I think you confuse web applications with the legacy desktop client/server application model. And that confusion leads to the mistake of trying to transfer the desktop document model to one that could adequately service advancing web applications.
  •  
    A CMS expert argues for HTML over XHTML, explaining his reasons for switching. Excellent read! He nails the basics. For similar reasons, we moved from ODF to ePUB and then to CDF and finally to the advanced WebKit document model, where wikiWORD will make its stand.
  •  
    See also my comment on the same web page that explains why HTML 5 is NOT it for document exchange between web editing applications.
  •  
    Response to marbux supporting the WebKit layout/document model. Marbux argues that HTML5 is not interoperable, and CSS2 near useless; HTML5 fails regarding the interop web applications need. I respond by arguing that the only way to look at web applications is to consider that the browser layout engine is the web application layout engine! Web applications are actually written to the browser layout/document model, OR, to take advantage of browser plug-in capabilities. The interoperability marbux seeks is tied directly to the browser layout engine. In this context, the web format is simply a reflection of that layout engine. If there's an interop problem, it comes from browser madness differentials.

    The good news is that there are all kinds of efforts to close the browser gap, including WHATWG HTML5, CSS3, the W3C DOM, JavaScript libraries, Google GWT (Java to JavaScript), Yahoo GUI, and my favorite, WebKit. The bad news is that the clock is ticking. Microsoft has pulled the trigger, and the great migration of MSOffice client/server systems to the MS WebStack-Mesh architecture has begun. Key to this transition are the WPF-.NET proprietary formats, protocols and interfaces such as XAML, Silverlight, LINQ, and Smart Tags. New business processes are being written, and old legacy desktop-bound processes are being transitioned to this emerging platform. The fight for the Open Web is on, with Microsoft threatening to transition their entire business desktop monopoly to a Web platform they own. ~ge~
Paul Merrell

Cover Pages: Open Web Foundation Formed to Support Community Specification Development.

  • The formation of the Open Web Foundation (OWF) was announced on July 24, 2008 at the OSCON 2008 Conference. OWF is "applying the open source model of seeing a common pain point and trying to patch the system by creating an 'organizational library' that makes it easier to go through a collaborative specification process and come out of it with clean IPR, leading to faster implementation and adoption."
  • According to the OWF web site: "The Open Web Foundation is an independent non-profit dedicated to the development and protection of open, non-proprietary specifications for web technologies. It is an attempt to create a home for community-driven specifications. Following an open source model similar to the Apache Software Foundation, the foundation is aimed at building a lightweight framework to help communities deal with the legal requirements necessary to create successful and widely adopted specifications. The foundation is trying to break the trend of creating separate foundations for each specification, coming out of the realization that we could come together and generalize our efforts... The Open Web Foundation is made up of individuals who believe that the open web is built on technologies that are created in the open by a diversity of contributors, and which are free to be used and improved upon without restriction."
Gonzalo San Gil, PhD.

The open web's guardians are acting like it's already dead / Boing Boing

  •  
    "The World Wide Web Consortium -- an influential standards body devoted to the open web -- used to make standards that would let anyone make a browser that could view the whole Web; now they're making standards that let the giant browser companies and giant entertainment companies decide which browsers will and won't work on the Web of the future. "
Gary Edwards

Sun pitches new cloud as 'Open Platform'

  •  
    Sun takes on the problem of interoperability and portability of applications in a world where there will be many many clouds. At the roll out of the Sun Cloud, key executives explain Sun's implementation of Open Cloud API's and what they see as a pressing need for management tools that will allow some standardization across clouds.

    Sun's Open Cloud API plan is a clean reuse of existing Open Web APIs.

    "..... The underpinning of the Open Cloud Platform that Sun will be pitching to developers is a set of cloud APIs, the creation of which is focused under Project Kenai and which has been released under a Community Commons open source license. Sun wants lots of feedback on the APIs and wants these APIs to become a standard too, hence the open license. These APIs describes how virtual elements in a cloud are created, started, stopped, and hibernated using HTTP commands such as GET, PUT, and POST...."

    "...... The upshot is that these APIs will allow programmatic access to virtual infrastructure from Java, PHP, Python, and Ruby and that means system admins can script how virtual resources are deployed. The APIs, as co-creator Tim Bray explains in his blog, are written in JavaScript Object Notation (JSON), not XML. The Q-Layer software is a graphical representation of what is going on down in the APIs, and you can moving virtual resources into the cloud with a click of a mouse using the dashboard or programmatically using the APIs from those four programming languages listed above. (PHP support is not yet available, but will be)....."
  •  
    I can see why Sun picked those four languages first. Can I assume that with a bit of work, this API will be usable from any language with a C "foreign function interface", such as Perl, Common Lisp, Bourne shell, Squeak Smalltalk, and others that your server application might be written in?
  •  
    I read this comment that largely answers my question at: http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud "So right now JSON out of a shell tool is not so good. More things like this will create pressure for development of tools to change that, but years of widespread XML/HTML deployment have only produced a few oddly maintained tools. Perhaps that's because you can scrape quite a bit of the web with a couple sed passes, and if I were to have to deal with the mentioned tools, that's probably the route I'd take." (seth w. klein) In other words, with a bit of work, _anything_ that can talk text over HTTP can do this, but an object-oriented language is likely to be more at home with JSON (JavaScript Object Notation).
Gary Edwards

Cocoa for Windows + Flash RiA Killer = SproutCore JavaScript Framework - RoughlyDrafted...

  • SproutCore brings the values of Leopard’s Cocoa to the web, domesticating JavaScript into a functional application platform with lots of free built-in support for desktop features. Being based on open web standards and being open source itself means SproutCore will enable developers to develop cross platform applications without being tied to either a plugin architecture or its vendor. Sitting on top of web standards will also make it easy for Apple and the community to push SproutCore ahead without worrying about incompatible changes to the underlying layers of Windows, a significant problem for the old Yellow Box or some new Cocoa analog. SproutCore also lives in a well known security context, preventing worries about unknown holes being opened up by a new runtime layer.
  •  
    The story of JavaScript and the browser as a RiA competitor continues to unfold. This lengthy summation from RoughlyDrafted is perhaps the best discussion I've ever seen of the technologies that will drive the Future of the Open Web. Roughly believes that Apple and Google are fighting for an Open Web Future, with Adobe and Microsoft RiA jousting for a broken web where they dominate application development. For sure the web is moving to become an application platform. The question is one of who will own the dominant API, and be in position to impose a global platform tax. This is a great summary demanding a careful read. It also confirms my belief that the WebKit layout and document model is the way forward. It's by far and away the best (X)HTML-CSS-DOM-JavaScript model out there. The W3C alternatives do not include JavaScript, and that pretty much seals their fate. And while there are many JavaScript libraries and frameworks to choose from, I would pay close attention to three initiatives: WebKit SproutCore, Gecko jQuery, and Google GWT. ~ge~
Gary Edwards

Does "A VC" have a blind spot for Apple? « counternotions on Flash, WebKit an...

  •  
    Flash versus Open: Perhaps one thing we can all agree on is that the future of the web, mobile or otherwise, will be more or less open. That would be HTML, MP3, H.264, HE-AAC, and so on. These are not proprietary Adobe products; they are open standards…unlike Flash. In confusing codecs with UI, Wilson keeps asking, "why is it tha[t] most streaming audio and video on the web comes through flash players and not html5 based players?" The answer is rather pedestrian: HTML5 is just ramping up, but Flash IDE has been around for many years. Selling Flash IDE and back-end server tools has been a commercial focus for Adobe, while Apple, for example, hasn't paid much attention to QuickTime technologies and promotion in ages. It's thus reflected in adoption patterns. Hopefully, this summary will clear Wilson's blind spot: Apple is betting on open technologies (as it makes money on hardware) while Adobe (which only sells software) is betting on wrapping up content in a proprietary shackle called Flash.
Gary Edwards

When You're a WebKit Hammer, Everything Looks Like an Open Web Nail ... As it should! - 0 views

  • You’re still waiting for me to explain what I meant when I referred to JavaScript as a last resort. I hinted at it in the preceding paragraph. Not the part on JavaScript debugging, but my reference to CSS and HTML. These do a lot more than paint screens. They are a browser's client-side framework. Everything they do is handled as native code. In other words, they're fast. CSS3 and HTML5 are too inconsistently implemented (if at all) across browsers to design to unless you're specifically targeting Safari, iPhone, or other WebKit-based browsers.
    • Gary Edwards
       
      Tom makes the point that the use of AJAX JavaScript breaks Web interoperability. He further points out that HTML is a static layout language, whereas CSS is dynamic and adaptive. (Use HTML5/DOM for document structure, and CSS4 for presentation: layout, formatting and the visual interface.)

      It is the consistency of the WebKit document model across all WebKit browsers that makes for an interoperable Open Web future. I would not, however, discount the importance of Firefox and Opera embracing the WebKit document model (HTML5, CSS4, SVG/Canvas, JavaScript, DOM2). That's our guarantee that the future of the Open Web will actually be open.

      Tom goes on to suggest that instead of "AJAX", developers would be better off thinking in terms of "ACHJAX": Asynchronous CSS4 - HTML5 - JavaScript and XML, with the focus on getting as much done in CSS4 as possible.
  •  
    InfoWorld's Tom Yager makes the case for the WebKit visual document model over AJAX. The problem with AJAX, as he sees it, is that it is JavaScript-heavy, and that breaks precious Web interoperability. He makes the point that if something can be done in CSS, it should be. He also argues that WebKit is the best tool because its document model is that of advanced HTML5 and CSS3.

    "... These [WebKit] browsers also share a stellar accelerated JavaScript interpreter that makes the edit/run/debug cycle go faster. They are also the only browsers that deliver on CSS4 and HTML5 standards (with some elements that are proposed to the W3C standards body). Sites that are visually rich may start sprouting "best viewed with Safari" banners until other browsers catch up. The banner would also let users know that your site is optimized for iPhone....."

    Hmm. Did you catch that? CSS4!!! I guess he's referring to the WebKit penchant for putting advanced graphical transitions and animations into CSS instead of relying on a device-specific or OS-specific API.

    Placing the visual interface instructions in the document's presentation layer (CSS4) is a revolutionary idea. The WebKit model will go a long way towards creating a global interoperability layer that rides above lower device, OS, browser and application specifics. So yes, by all means let's go with CSS4 :) (A short sketch of the idea follows.)
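
  •  
    To make the "do it in CSS, not JavaScript" point concrete, here is a minimal sketch with a hypothetical element id and property values. The animation is declared as a CSS transition, so the browser's native engine runs the frames; script only flips the target value.

      // Minimal sketch (hypothetical id and values): declare the animation
      // in CSS, the presentation layer, and use JavaScript only to change
      // the target property.
      var panel = document.getElementById('panel');

      // WebKit-era browsers read the -webkit- prefixed form; the
      // unprefixed form is the standard spelling.
      panel.style.webkitTransition = 'opacity 0.5s ease-in-out';
      panel.style.transition = 'opacity 0.5s ease-in-out';

      // One property change; the native CSS engine animates the frames.
      panel.style.opacity = '0';

    The JavaScript-heavy alternative would be a setInterval() timer loop mutating the style a frame at a time in script, which is exactly the kind of per-browser behavior Yager argues breaks interoperability.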

Gary Edwards

Breaking the Web: The Document War between HTML+ and OOXML - 0 views

  •  
    Microsoft to the world: Outlook's not broken and we aren't 'fixing' it! Mary Jo has an interesting article over at ZDNet. She points out that Microsoft is refusing to restore support for HTML editing in Outlook. Instead, Microsoft intends to use the MSWord editor. I think that means a Microsoft desktop future based on Office OpenXML (OOXML). We shall see. But if this is the case, then I also think we are looking at how Microsoft will break the Web. I've left an extensive comment on Mary Jo's article in the Talkback section, linked above. ".... This is for all the marbles. The future of the Open Web is at stake. If Microsoft is successful at carving out and encoding an MS Web based on a document format specific to their platforms, applications and services, the Web will break. "
    "Looks like a plan to me."
    continued here
Gonzalo San Gil, PhD.

Midori in Launchpad - 0 views

  •  
    Midori is a fast and lightweight web browser that uses the WebKit rendering engine and the GTK+ interface, with support for HTML5. It can manage many open tabs and windows. The URL bar completes history, bookmarks, search engines and open tabs out of the box. Web developers can use the powerful web inspector that is part of WebKit. Individual pages can easily be turned into web apps, and new profiles can be created on demand. A number of extensions are included by default:
    * Adblock, with support for ABP filter lists and custom rules, is built in.
    * Files can be downloaded with Aria2 or SteadyFlow.
    * User scripts and styles are supported, a la Greasemonkey.
    * Cookies and scripts can be managed via NoJS and Cookie Security Manager.
    * Open tabs can be switched in a vertical panel or a popup window.
    Join #midori on irc.freenode.net for discussions about bugs and development. Project statistics: https://www.ohloh.net/p/midori
Gary Edwards

Meteor: The NeXT Web - 0 views

  •  
    "Writing software is too hard and it takes too long. It's time for a new way to write software - especially application software, the user-facing software we use every day to talk to people and keep track of things. This new way should be radically simple. It should make it possible to build a prototype in a day or two, and a real production app in a few weeks. It should make everyday things easy, even when those everyday things involve hundreds of servers, millions of users, and integration with dozens of other systems. It should be built on collaboration, specialization, and division of labor, and it should be accessible to the maximum number of people. Today, there's a chance to create this new way - to build a new platform for cloud applications that will become as ubiquitous as previous platforms such as Unix, HTTP, and the relational database. It is not a small project. There are many big problems to tackle, such as: How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data? How do we design software to run in a radically distributed environment, where even everyday database apps are spread over multiple data centers and hundreds of intelligent client devices, and must integrate with other software at dozens of other organizations? How do we prepare for a world where most web APIs will be push-based (realtime), rather than polling-driven? In the face of escalating complexity, how can we simplify software engineering so that more people can do it? How will software developers collaborate and share components in this new world? Meteor is our audacious attempt to solve all of these big problems, at least for a certain large class of everyday applications. We think that success will come from hard work, respect for history and "classically beautiful" engineering patterns, and a philosophy of generally open and collaborative development. " .............. "It is not a
  •  
    "How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data?" From a litigation aspect, the best bet I know of is antitrust litigation against the W3C and the WHATWG Working Group for implementing a non-interoperable specification. See e.g., Commission v. Microsoft, No. T-167/08, European Community Court of First Instance (Grand Chamber Judgment of 17 September, 2007), para. 230, 374, 421, http://preview.tinyurl.com/chsdb4w (rejecting Microsoft's argument that "interoperability" has a 1-way rather than 2-way meaning; information technology specifications must be disclosed with sufficient specificity to place competitors on an "equal footing" in regard to interoperability; "the 12th recital to Directive 91/250 defines interoperability as 'the ability to exchange information and mutually to use the information which has been exchanged'"). Note that the Microsoft case was prosecuted on the E.U.'s "abuse of market power" law that corresponds to the U.S. Sherman Act § 2 (monopolies). But undoubtedly the E.U. courts would apply the same standard to "agreements among undertakings" in restraint of trade, counterpart to the Sherman Act's § 1 (conspiracies in restraint of trade), the branch that applies to development of voluntary standards by competitors. But better to innovate and obsolete HTML, I think. DG Competition and the DoJ won't prosecute such cases soon. For example, Obama ran for office promising to "reinvigorate antitrust enforcement" but his DoJ has yet to file its first antitrust case against a big company. Nb., virtually the same definition of interoperability announced by the Court of First Instance is provided by ISO/IEC JTC-1 Directives, annex I ("eye"), which is applicable to all international standards in the IT sector: "... interoperability is understood to be the ability of two or more IT systems to exchange information at one or more standardised interfaces
Gonzalo San Gil, PhD.

How Big Is Your Target? - Freedom Penguin - 0 views

  •  
    "April 20, 2016 Jacob Roecker 0 Comment Opinion In his 2014 TED presentation Cory Doctorow compares an open system of development to the scientific method and credits the methods for bringing mankind out of the dark ages. Tim Berners-Lee has a very credible claim to patent the technology that runs the internet, but instead has championed for its open development. This open development has launched us forward into a brave new world. Nearly one third of all internet traffic rides on just one openly developed project. "