Future of the Web: Group items tagged "quality"

Gary Edwards

Good News for Ajax and the Open Web - The Browser Wars Are Back - 0 views

  • For much of this decade, Web browsing has been dominated by Microsoft's Internet Explorer (IE), which at its height achieved market share numbers approaching 95%, with the result that Microsoft owned a de facto standard for the Web and held effective veto power over the future of HTML. During much of this period, Microsoft suspended development of IE, with the result that virtually no new features appeared within the world's dominant browser from 2001 to 2006. But while IE was sleeping, one of the biggest phenomena of the computer age happened: Ajax. Clever Web developers discovered gold in them there mountains. Using Ajax techniques, Web developers could create desktop-like rich user interfaces right in the browser. Not only that, Ajax was evolutionary. Ajax offered an incremental path from the industry's existing HTML-based infrastructure and know-how, allowing Web developers to add rich Ajax elements to an existing HTML page.
  • A companion community effort helping to accelerate the adoption of open standards is the Web Standards Project (http://www.webstandards.org), which is producing a set of "acid tests" that verify browser support for Open Web technologies, such as HTML, CSS and JavaScript. Acid2 is focused mainly on CSS support, and is now supported by Opera, Safari/WebKit, and IE. Acid3 (http://www.webstandards.org/action/acid3) tests DOM scripting, CSS rendering, …
    • Gary Edwards
       
      The amazing thing about Ajax and the Open Web is the way WHATWG, WebKit, and the Web Standards "ACID" work has accelerated Open Web Standards, pushing far beyond the work of the glacial W3C.
  • Runtime Advocacy Task Force
  •  
    Lengthy article from the OpenAjax Alliance summarizing HTML, Ajax, and the future of the Open Web. Very well referenced, with lots of whitepapers and links.
  •  
    Good summarization of the Open Web future.
Paul Merrell

Office Business Applications for Store Operations - 0 views

  • Service orientation addresses these challenges by centering on rapidly evolving XML and Web services standards that are revolutionizing how developers compose systems and integrate them over distributed networks. No longer are developers forced to make do with the rigid and proprietary languages and object models that were the norm before service orientation came into play. The emergence of this new methodology is helping to develop new approaches specifically for Web-based distributed computing. This revolution is transforming the business by integrating disparate systems to establish a real-time enterprise. Making information available where it is needed to simplify merchandising processes requires a methodology based on loosely coupled integration between various in-store and back-end applications. This demand makes an architecture based on service orientation critical for integration between disparate applications. In addition, surfacing information at the right place requires the ability to compose dynamic applications using an array of underlying services. The Office Business Applications platform provides this ability to create composite applications, such as dashboards for store, regional, and corporate managers.
  •  
    Summary: Changing market conditions require agility in business applications. Service orientation answers the challenge by centering on XML and Web services standards that revolutionize how developers compose systems and integrate them over distributed networks. Once integrated, how is the information presented to the decision makers? (36 printed pages)
Paul Merrell

WG Review: Internet Wideband Audio Codec (codec) - 0 views

  • According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions: 1. Are optimized for use in interactive Internet applications. 2. Are published by a recognized standards development organization (SDO) and therefore subject to clear change control. 3. Can be widely implemented and easily distributed among application developers, service operators, and end users. There exist codecs that provide high quality encoding of audio information, but that are not optimized for the actual conditions of the Internet; according to reports, this mismatch between design and deployment has hindered adoption of such codecs in interactive Internet applications.
  • The goal of this working group is to develop a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users. Core technical considerations include, but are not necessarily limited to, the following: 1. Designing for use in interactive applications (examples include, but are not limited to, point-to-point voice calls, multi-party voice conferencing, telepresence, teleoperation, in-game voice chat, and live music performance) 2. Addressing the real transport conditions of the Internet as identified and prioritized by the working group 3. Ensuring interoperability with the Real-time Transport Protocol (RTP), including secure transport via SRTP 4. Ensuring interoperability with Internet signaling technologies such as Session Initiation Protocol (SIP), Session Description Protocol (SDP), and Extensible Messaging and Presence Protocol (XMPP); however, the result should not depend on the details of any particular signaling technology
Paul Merrell

Firefox, YouTube and WebM ✩ Mozilla Hacks - the Web developer blog - 1 views

  • 1. Google will be releasing VP8 under an open source and royalty-free basis. VP8 is a high-quality video codec that Google acquired when it purchased the company On2. The VP8 codec represents a vast improvement in quality-per-bit over Theora and is comparable in quality to H.264. 2. The VP8 codec will be combined with the Vorbis audio codec and a subset of the Matroska container format to build a new standard for Open Video on the web called WebM. You can find out more about the project at its new site: http://www.webmproject.org/. 3. We will include support for WebM in Firefox. You can get super-early WebM builds of Firefox 4 pre-alpha today. WebM will also be included in Google Chrome and Opera. 4. Every video on YouTube will be transcoded into WebM. They have about 1.2 million videos available today and will be working through their back catalog over time. But they have committed to supporting everything. 5. This is something that is supported by many partners, not just Google. Content providers like Brightcove have signed up to support WebM as part of a full HTML5 video solution. Hardware companies, encoding providers and other parts of the video stack are all part of the list of companies backing WebM. Even Adobe will be supporting WebM in Flash. Firefox, with its market share and principled leadership, and YouTube, with its video reach, are the most important partners in this solution, but we are only a small part of the larger ecosystem of video.
Gonzalo San Gil, PhD.

Pirates Fail to Prevent American Sniper's Box Office Record | TorrentFreak - 0 views

  •  
    " Ernesto on January 19, 2015 (Opinion): American Sniper grossed record numbers at the U.S. box office this weekend. A surprising result, not least because a high quality leak was widely available prior to the premiere, resulting in millions of pirate downloads and streams. How can this be?" # ! sharing boosts sales. [Welcome, dear customers: You Are Stupid.]
Alexandra IcecreamApps

Best Free Audio Converters for Windows - Icecream Tech Digest - 0 views

  •  
    With great power comes great responsibility, and with the fast development of media technologies comes a wide variety of media formats. Each audio format has its own significant features: some maintain excellent quality, some offer compact size, some can be played …
Gonzalo San Gil, PhD.

Everything we know about Tidal, the artist-owned music streaming service - 0 views

  •  
    "1. What is Tidal? Tidal is a music subscription service for audio and video files. The focus is on sound quality; Tidal also promises exclusive songs and videos from artists."
Alexandra IcecreamApps

All You Need to Know about WebM Format - Icecream Tech Digest - 0 views

  •  
    WebM format was first introduced by Google in 2010. Since this video format is based on the Matroska container, it manages to support great video quality. As for the audio streams, it supports Vorbis audio. WebM format is initially designed …
Paul Merrell

Banning end-to-end encryption being considered by Trump team - 9to5Mac - 0 views

  • The Trump administration is considering the possibility of banning end-to-end encryption, as used by services like Apple’s Messages and FaceTime, as well as competing platforms like WhatsApp and Signal. The topic was reportedly the focus of a previously unreported National Security Council meeting on Wednesday … Politico cites three sources for the story. Senior Trump administration officials met on Wednesday to discuss whether to seek legislation prohibiting tech companies from using forms of encryption that law enforcement can’t break — a provocative step that would reopen a long-running feud between federal authorities and Silicon Valley. The encryption challenge, which the government calls “going dark,” was the focus of a National Security Council meeting Wednesday morning that included the No. 2 officials from several key agencies, according to three people familiar with the matter. The meeting reportedly discussed two options. Senior officials debated whether to ask Congress to effectively outlaw end-to-end encryption, which scrambles data so that only its sender and recipient can read it […] “The two paths were to either put out a statement or a general position on encryption, and [say] that they would continue to work on a solution, or to ask Congress for legislation,” said one of the people. No decision was reached given strongly opposing views within the government.
Paul Merrell

Ohio's attorney general wants Google to be declared a public utility. - The New York Times - 2 views

  • Ohio’s attorney general, Dave Yost, filed a lawsuit on Tuesday in pursuit of a novel effort to have Google declared a public utility and subject to government regulation. The lawsuit, which was filed in a Delaware County, Ohio court, seeks to use a law that’s over a century old to regulate Google by applying a legal designation historically used for railroads, electricity and the telephone to the search engine. “When you own the railroad or the electric company or the cellphone tower, you have to treat everyone the same and give everybody access,” Mr. Yost, a Republican, said in a statement. He added that Ohio was the first state to bring such a lawsuit against Google. If Google were declared a so-called common carrier like a utility company, it would prevent the company from prioritizing its own products, services and websites in search results. Google said it had none of the attributes of a common carrier, which usually provides a standardized service for a fee using public assets, such as rights of way. The “lawsuit would make Google Search results worse and make it harder for small businesses to connect directly with customers,” José Castañeda, a Google spokesman, said in a statement. “Ohioans simply don’t want the government to run Google like a gas or electric company. This lawsuit has no basis in fact or law and we’ll defend ourselves against it in court.” Though the Ohio lawsuit is a stretch, there is a long history of government control of certain kinds of companies, said Andrew Schwartzman, a senior fellow at the nonprofit Benton Institute for Broadband & Society. “Think of ‘The Canterbury Tales.’ Travelers needed a place to stay and eat on long road treks, and innkeepers were not allowed to deny them accommodations or rip them off,” he said.
  • After a series of federal lawsuits filed against Google last year, Ohio’s lawsuit is part of a next wave of state actions aimed at regulating and curtailing the power of Big Tech. Also on Tuesday, Colorado’s legislature passed a data privacy law that would allow consumers to opt out of data collection. On Monday, New York’s Senate passed antitrust legislation that would make it easier for plaintiffs to sue dominant platforms for abuse of power. After years of inaction on tech legislation in Congress, states are beginning to fill the regulatory vacuum. Ohio was also one of 38 states that filed an antitrust lawsuit in December accusing Google of being a monopoly and using its dominant position in internet search to squeeze out smaller rivals.
Gonzalo San Gil, PhD.

Directory of Open Access Journals - 1 views

  •  
    Free, full-text, quality-controlled scientific and scholarly journals, covering all subjects and many languages.
Gonzalo San Gil, PhD.

Net Neutrality: EU Parliament Must Amend Kroes' Dangerous Proposal | La Quadrature du Net - 1 views

  •  
    "Paris, 5 December 2013 - On Monday 9th December, the rapporteur Pilar del Castillo Vera (EPP - Spain) will present to the "Industry" (ITRE) Committee of the European Parliament her draft report on Neelie Kroes' proposal for a Regulation on the Telecom Package. Citizens must urge MEPs to amend this report in order to accurately define what qualifies as 'specialised services' with 'enhanced' quality of service, and ensure that the Regulation will guarantee a genuine and unconditional Net neutrality principle."
Gonzalo San Gil, PhD.

Outernet | Information for the World from Outer Space - 1 views

  •  
    "Information for the World from Outer Space Unrestricted, globally accessible, broadcast data. Quality content from all over the Internet. Available to all of humanity. For free."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
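  • A minimal sketch of that class-attribute idea in Python, using the lxml library; the class names here ("epigraph", "summary") are invented placeholders for a house convention, not anything defined by XHTML itself:

    from lxml import html

    # Recover house semantics carried in plain XHTML class attributes.
    doc = html.parse("chapter.xhtml").getroot()   # illustrative file name
    for p in doc.xpath('//p[@class="epigraph" or @class="summary"]'):
        role = p.get("class")                     # the extra semantic label
        print(role, "->", p.text_content()[:60])  # first 60 chars of the chunk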
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
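  • A quick way to see this anatomy for yourself is to open an ePub with any zip tool. A minimal sketch in Python, standard library only (the file name is illustrative):

    import zipfile

    with zipfile.ZipFile("book.epub") as epub:
        print(epub.namelist())                   # metadata files plus XHTML content
        # The 'mimetype' entry is fixed by the ePub spec:
        print(epub.read("mimetype").decode("ascii"))       # application/epub+zip
        # container.xml points at the package (OPF) file, which in turn
        # lists the XHTML documents that make up the book's content.
        print(epub.read("META-INF/container.xml").decode("utf-8"))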
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
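  • The article does not reproduce the actual search-and-replace; here is a sketch of the same cleanup done programmatically in Python with lxml, assuming the export marks list items as <p class="list-item"> (a hypothetical class name):

    from lxml import html

    doc = html.parse("chapter-export.html").getroot()   # illustrative file name

    # Regroup consecutive <p class="list-item"> paragraphs into a real
    # <ul> of <li> elements, the proper XHTML list structure.
    for p in doc.xpath('//p[@class="list-item"]'):
        prev = p.getprevious()
        if prev is not None and prev.tag == "ul":
            ul = prev                          # extend the list in progress
        else:
            ul = p.makeelement("ul", {})       # start a new list
            p.addprevious(ul)
        li = ul.makeelement("li", {})
        li.text = p.text
        for child in list(p):                  # move any inline markup across
            li.append(child)
        ul.append(li)
        p.getparent().remove(p)

    print(html.tostring(doc, pretty_print=True).decode())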
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
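  • As a minimal illustration of that transformation step (not the authors' actual script), here is a toy XSLT 1.0 stylesheet run from Python with lxml; the output elements (Story, ParagraphRange) are simplified placeholders, not real ICML names:

    from lxml import etree

    # Map each XHTML paragraph to a print-oriented placeholder element.
    XSL = b'''<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:h="http://www.w3.org/1999/xhtml">
      <xsl:output method="xml" indent="yes"/>
      <xsl:template match="/">
        <Story><xsl:apply-templates select="//h:p"/></Story>
      </xsl:template>
      <xsl:template match="h:p">
        <ParagraphRange style="Body"><xsl:value-of select="."/></ParagraphRange>
      </xsl:template>
    </xsl:stylesheet>'''

    transform = etree.XSLT(etree.fromstring(XSL))
    doc = etree.parse("chapter.xhtml")        # illustrative input file
    print(str(transform(doc)))

    Run against a chapter file, this emits the print-oriented XML that a layout template could then consume; a real script would map headings, lists and character ranges as well.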
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
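  • eCub does this packaging automatically; purely to make the “wrapper” concrete, here is a bare-bones sketch of the packaging step in Python (titles, identifiers and file names are invented, and a real ePub 2 file also wants an NCX table of contents, omitted here):

    import zipfile

    CONTAINER = '''<?xml version="1.0"?>
    <container version="1.0"
        xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf"
            media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>'''

    OPF = '''<?xml version="1.0"?>
    <package version="2.0" xmlns="http://www.idpf.org/2007/opf"
        unique-identifier="bookid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Book Publishing 1</dc:title>
        <dc:language>en</dc:language>
        <dc:identifier id="bookid">example-0001</dc:identifier>
      </metadata>
      <manifest>
        <item id="ch1" href="chapter1.xhtml"
            media-type="application/xhtml+xml"/>
      </manifest>
      <spine><itemref idref="ch1"/></spine>
    </package>'''

    XHTML = ('<html xmlns="http://www.w3.org/1999/xhtml"><head>'
             '<title>Chapter 1</title></head>'
             '<body><p>Hello, Web-first book.</p></body></html>')

    with zipfile.ZipFile("book.epub", "w") as epub:
        # The mimetype entry must come first and be stored uncompressed.
        epub.writestr("mimetype", "application/epub+zip",
                      compress_type=zipfile.ZIP_STORED)
        epub.writestr("META-INF/container.xml", CONTAINER)
        epub.writestr("OEBPS/content.opf", OPF)
        epub.writestr("OEBPS/chapter1.xhtml", XHTML)   # one file per chapter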
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Five Magnificent Linux Music Streaming Clients - Linux Links - The Linux Portal Site - 1 views

  •  
    "Almost all music streaming services fall short of the audio quality of CDs, but their popularity has more to do with sheer convenience than high-fidelity sound reproduction. "
Paul Merrell

W3C Helps Authors Go Mobile - 0 views

  • http://www.w3.org/ -- 8 December 2008 -- Today, W3C has made it easier to create content designed to improve people's mobile experience using a broad range of devices. W3C invites the community to try the W3C mobileOK checker, which is based on the newly published standard, the mobileOK Basic Tests 1.0 Recommendation. "The new checker builds on the suite of quality assurance tools offered by W3C to help authors and authoring tool developers create clean content," said Tim Berners-Lee, W3C Director. "Clean content offers a number of benefits to authors and users alike. The mobileOK checker does a nice job helping you improve your content one step at a time. Your mobile audience will thank you each time you improve your score."
  • The mobileOK Basic tests are based on the part of the Mobile Web Best Practices that can be verified automatically with software. The checker makes use of the popular W3C validator to help improve content quality. In addition to the mobile-friendliness score, the checker offers tips for meeting the needs of people on the go.
Gonzalo San Gil, PhD.

OECD Work on Digital Content - 0 views

  •  
    OECD Working Party on the Information Economy (www.oecd.org/sti/digitalcontent): Work Plan on Digital Broadband Content; OECD Recommendation on Public Sector Information; OECD Policy Guidance for Digital Content. The OECD's Working Party on the Information Economy (WPIE) is undertaking analysis of the digital delivery of content. This work recognises that the rapid development of high-quality "always on" broadband Internet services is transforming high-growth industries that provide or have the potential to provide digital content. Specifically, this work includes stocktaking studies in the following areas: scientific publishing, music, on-line computer games, mobile content, user-created content, digital content and the evolution of the film and video industries, and public sector information and content.
Paul Merrell

Internet Archive: Scanning Services - 1 views

  • Digitizing Print Collections with the Internet Archive Open and free online access, permanent storage, unlimited downloads and lifetime file management. We can help digitize your collections in 4 simple steps:
  • In addition to permanent hosting on archive.org, your books will be integrated with Open Library, openlibrary.org, a page on the web for every book.
  • Non-destructive color scanning using our Scribe system at one of our scanning centers across the globe. Complete MARC records, Dublin Core & XML, just 10c USD per image and a small set up charge per item.
  • ...4 more annotations...
  • Create and upload high-quality JPEG 2000 (JP2) images; persistent identifiers, lifetime hosting of files, lifetime management of file system and file access.
  • Create high-quality PDF/A files; run OCR across texts to allow "search inside" for all books. Add to the Internet Archive search engine; display via our open source Book Reader.
  • 2,000,000 books online; 600 million pages scanned; 1,500 books scanned each day; 15 million downloads each month; 33 scanning centers in 7 countries; 3.5 petabytes of storage; 8 Gb per second of bandwidth.
  • Library of Congress, Harvard University, The New York Public Library, Smithsonian Institution, The Getty Research Institute, University of California, University of Toronto, Biodiversity Heritage Library, Boston Library Consortium, C.A.R.L.I., Johns Hopkins University, Allen County Public Library, Lyrasis, Massachusetts Institute of Technology, State Library of Massachusetts . . . and over 1,000 other Open Content Alliance partners.
  •  
    I've been looking for a permanent online home for a couple of historical works I co-authored. My guiding criterion has been the best chance of the works' long-term survival in a publicly accessible form after my death. I think I may have just found my solution.
Gary Edwards

Effective Web Typography: Rules, Techniques and Responsive Design - Designmodo - 0 views

  •  
    "Sam Norton  *  Coding, Design  *  February 16, 2015  *  8 Comments Responsive Web Design isn't just about columns, grids, images and icons. All of this will not make sense without text for content. As Bill Gates once said "Content is King." When it comes to content, we need to talk about web typography. Looking at modern web design trends, having responsive typography is a big factor every web designer and web developer shouldn't miss. Here, we will discuss creating responsive web typography and factors you need to know about it. Typography Basics Good typography is all about selecting the right type for web or printed media. From font type, color of the text to the length and font-size on different viewports, good typography ensures that the final letter forms generate the highest quality end result. Before we dive in to the process of creating successful responsive web typography, here are a few terms that you need to understand. "
Gonzalo San Gil, PhD.

Introducing EFF's Stupid Patent of the Month | Electronic Frontier Foundation - 0 views

  •  
    "Here at EFF, we see a lot of stupid patents. There was the patent on "scan to email." And the patent on "bilateral and multilateral decision making." There are so many stupid patents that Mark Cuban endowed a chair at EFF dedicated to eliminating them. We wish we could catalog them all, but with tens of thousands of low-quality software patents issuing every year, we don't have the time or resources to undertake that task."