Future of the Web: Group items tagged "ads"

Paul Merrell

Understanding Lotus Connections, IBM's Version of Web 2.0 For The Enterprise - CIO.com ... - 0 views

  • As innovation in the consumer space spills over into the enterprise, IBM believes its social software suite that includes blogs and social networks for business will give users the collaboration features they want while giving IT the ability to hook it into existing systems.
  • Lotus Connections represents IBM's response to a Web 2.0 world.
Paul Merrell

Understanding Microsoft SharePoint in a Web 2.0 World - CIO.com - Business Technology L... - 0 views

  • While analysts say the social software in SharePoint lacks the functionality and usability of competitor products, its tight integration with existing Microsoft systems such as Exchange and Office makes it an attractive buy for IT departments looking to capitalize on the Web 2.0 movement while still utilizing the technology tools they already have inside their companies.
  • For its part, Microsoft boasts a staggering rate of adoption for SharePoint. According to Rob Curry, director of SharePoint, Microsoft has sold around 100 million licenses of the product.
  •  
    Good article about SharePoint, although it does not capture the breadth of the Microsoft assault on the Web.
Paul Merrell

HTML 5 Draft Recommendation - 0 views

  • Draft Recommendation — 29 May 2008
  • Abstract: This specification evolves HTML and its related APIs to ease the authoring of Web-based applications. Additions include context menus, a direct-mode graphics canvas, inline popup windows, and server-sent events. Heavy emphasis is placed on keeping the language backwards compatible with existing legacy user agents and on keeping user agents backwards compatible with existing legacy documents.
    • Paul Merrell
       
      HTML 5 may "ease the authoring of Web-based applications," but has nothing to offer web app developers or users in the way of the interoperable interchange of web app page or sub-page content.
Paul Merrell

The New York Times Archives + Amazon Web Services = TimesMachine - Open - Code - New Yo... - 0 views

  • TimesMachine is a collection of full-page image scans of the newspaper from 1851–1922 (i.e., the public domain archives). Organized chronologically and navigated by a simple calendar interface, TimesMachine provides a unique way to traverse the historical archives of The New York Times.
  • Using Amazon Web Services, Hadoop and our own code, we ingested 405,000 very large TIFF images, 3.3 million articles in SGML and 405,000 XML files mapping articles to rectangular regions in the TIFFs. This data was converted into 810,000 more web-friendly PNG images (thumbnails and full images) and 405,000 JavaScript files — all of it ready to be assembled into a TimesMachine. By leveraging the power of AWS and Hadoop, we were able to utilize hundreds of machines concurrently and process all the data in less than 36 hours.
Paul Merrell

Publicly Available Standards - 0 views

    • Paul Merrell
       
      This is the download page for ISO/IEC information technology standards available at no charge. The same standards are available from ISO, IEC, and other standards organizations' web pages for a fee. If you need an ISO/IEC information technology standard, check here before you pay money for what's also given away for free. Notice that standards are arranged on the page in numerical order.
  •  
    Most ISO and IEC standards are only available for purchase. However, a few are publicly available at no charge. ISO/IEC 26300:2006 is one of the latter and can be downloaded from this page in XHTML format. Note that the standards listed on the page are arranged numerically and the OpenDocument standard is very near the bottom of the page. This version of ODF is the only version that has the legal status of an international standard, making it eligible as a government procurement specification throughout all Member nations of the Agreement on Government Procurement.
Paul Merrell

Digital Web Magazine - HTML5, XHTML2, and the Future of the Web - 0 views

  •  
    The browser-centric view of why HTML5 is better than XHTML2. Notice that the entire discussion does not address the need for interoperable data exchange between different web applications, let alone for their interaction with more traditional desktop or mobile device editors. HTML5 is enormously under-specified for data exchange among anything but web browsers. As only one small example, neither HTML5 nor CSS Selectors have a specified standard element for footnotes and footnote calls, let alone attributes for their numbering style, formatting, and location. And even if CSS Selectors included such elements and attributes, CSS lives in web site page templates, not in the web app editors for site content that use HTML forms. Easy pickings for Microsoft and its proprietary stack that does interoperably integrate the desktop, servers, devices, and the Web.
Paul Merrell

Facebook plans to build huge Utah data center | Fox Business - 0 views

  • Facebook plans to build a data center in Utah on a 490-acre site, the social media company and government officials announced on Wednesday. The Eagle Mountain, Utah, facility will be 970,000 square feet and located about 15 miles south of a National Security Agency data center in Bluffdale, Utah.
  •  
    15 miles from the NSA's Bluffdale facility. An amazing coincidence or something more sinister?
Paul Merrell

Keller Lenkner & Quinn Emanuel File Antitrust Class-Action Lawsuit Against Facebook - 1 views

  • National plaintiffs’ law firm Keller Lenkner LLC and global business litigation firm Quinn Emanuel Urquhart & Sullivan, LLP filed a class-action lawsuit against Facebook, Inc. alleging violations of federal antitrust laws and California law on behalf of Facebook users. Filed in the U.S. District Court for the Northern District of California, the complaint alleges that Facebook obtained and maintained a social network and social media monopoly by consistently deceiving consumers about the data-privacy protections it provided to users, and by exploiting the data it extracted from users to target smaller startup companies for destruction or acquisition. The lawsuit seeks to put an end to Facebook’s misrepresentations about its privacy practices and its anticompetitive acquisition conduct; to require Facebook to engage in third-party auditing of its conduct; and to require Facebook to divest assets, such as Instagram and WhatsApp, that entrench its market power.
  • According to the complaint, which was filed on behalf of named plaintiffs Sarah Grabert and Maximilian Klein, Facebook did not achieve its Big Tech monopoly through innovation or vigorous competition. Despite its public pledge to protect user privacy, Facebook lied to users and violated their trust in a scheme to build a technology empire. Facebook also acquired technology from smaller firms that it used to track consumer activity across the internet so that it could identify and target competitors. The complaint further alleges that in a strategic, intentional ploy for market domination, Facebook engaged in its scheme to destroy all competition without a care for the ultimate harm it would inflict on consumers. By the time Facebook’s deception about its lackluster privacy protections became public knowledge, Facebook had already achieved dominance, making it difficult for any firm to challenge its social media and social network monopoly.
  • The complaint notes that Facebook derives enormous economic value from the data it harvests from consumers on its platform. In fact, Facebook itself has described how it generates massive earnings per user from the data it collects. The complaint details how Facebook’s destruction of competition has caused consumers substantial economic injury. Consumers who sign up for Facebook agree to give up their valuable data and attention in exchange for using Facebook’s platform. That information and attention is then sold in measurable units to advertisers in exchange for money. The complaint alleges that consumers were harmed by Facebook’s anticompetitive conduct, as they did not receive the benefit of their bargain with Facebook. The lawsuit includes claims for violations of federal antitrust laws and California common law. It also seeks an order enjoining Facebook from continuing to engage in the alleged wrongful acts, requiring Facebook to engage third-party auditors to evaluate and correct problems with Facebook’s conduct, and requiring Facebook to divest assets like Instagram and WhatsApp. The lawsuit also seeks monetary damages, restitution and/or disgorgement of Facebook’s wrongful gains, attorneys’ fees, and costs.
Gonzalo San Gil, PhD.

Michael Robertson: Helping Publishers Avoid the Google Destroyer - 0 views

  •  
    "If you're an online news publisher then Google is not your friend. Neither is Facebook. (No matter how fast articles load.) They want to suck away all content and keep all the advertising money. Google has already achieved much of this and Facebook is not far behind. With the loss of advertisers, many publishers have been lured into slimy clickbait ads, which slow the website and damage the user experience. T"
Gonzalo San Gil, PhD.

Google's My Activity reveals just how much it knows about you | Technology | The Guardian - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! #spying tools #disguised #as leading Electronic #Services...
  •  
    "Search company launches new opt-in ad service for non-Google sites and tools that show how it tracks your internet activity "
Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea... - 0 views

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms (Android, iOS, OS X, and Windows) and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
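  •  
    As a minimal sketch of that kind of class-based extension (the class names "epigraph" and "summary" here are illustrative assumptions, not a published microformat), a publisher might write:

      <div class="chapter">
        <p class="epigraph">An epigraph, marked out only by its class attribute.</p>
        <p>An ordinary paragraph needs no class at all.</p>
        <p class="summary">A summary, which generic XHTML tools simply render as
          a paragraph while class-aware tools can treat it specially.</p>
      </div>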
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
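  •  
    Concretely, unzipping a typical ePub reveals a layout along these lines (a representative sketch: the OEBPS directory name is a common convention rather than a requirement, and file names vary by producer):

      mimetype                  -- the literal string "application/epub+zip"
      META-INF/container.xml    -- points at the package file below
      OEBPS/content.opf         -- the book's metadata and manifest
      OEBPS/toc.ncx             -- navigation / table of contents
      OEBPS/chapter01.xhtml     -- the content itself, as XHTML
      OEBPS/styles.css          -- optional styling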
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
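  •  
    To illustrate the kind of rewrite involved (the class name "bullet" is a representative assumption; InDesign's actual export classes vary by version and by document), the export gave us presentation-oriented structures like the first fragment below, which we converted to the structural second form:

      <!-- as exported: list items tagged as classed paragraphs -->
      <p class="bullet">First item</p>
      <p class="bullet">Second item</p>

      <!-- after cleanup: a proper XHTML list -->
      <ul>
        <li>First item</li>
        <li>Second item</li>
      </ul>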
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
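  •  
    The article does not reproduce the script itself, but a heavily trimmed sketch of the general shape of such a transformation might look like the following. The ParagraphStyleRange/CharacterStyleRange/Content elements follow InCopy's ICML story schema; the applied style names are placeholders, and the ICML document wrapper plus the handling of headings, lists, and inline markup are all omitted:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">

        <!-- Map each XHTML paragraph to an ICML paragraph run;
             assumes the source document declares the XHTML namespace -->
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/[No character style]">
              <!-- value-of flattens any inline markup: a deliberate simplification -->
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>

    A run of the nearly ubiquitous xsltproc processor mentioned above would then look like "xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml" (file names hypothetical), and the resulting story can be placed into the InDesign template.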
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

MPAA Targets New Anti-Piracy Ads... At People Who Already Paid To Go See Movies | Techdirt - 0 views

  •  
    "from the strategery! dept There's that old joke that you've probably heard (in part because we've mentioned it in other contexts), about the drunk man searching for his keys under a streetlight, while admitting that he lost them further down the street. When asked why he's looking over by the light instead, he says "because that's where the light is." People even refer to this as the streetlight effect. And you can see it in all sorts of odd places."
Gonzalo San Gil, PhD.

Google Is Taking a Big Step to Kill Off Flash for Good | WIRED - 1 views

  •  
    "Starting tomorrow, Google's Chrome browser will automatically pause web ads that use Flash. "
Paul Merrell

Senate majority whip: Cyber bill will have to wait until fall | TheHill - 0 views

  • Senate Majority Whip John Cornyn (R-Texas) on Tuesday said the upper chamber is unlikely to move on a stalled cybersecurity bill before the August recess. Senate Republican leaders, including Cornyn, had been angling to get the bill — known as the Cybersecurity Information Sharing Act (CISA) — to the floor this month. But Cornyn said that there is simply too much of a time crunch in the remaining legislative days to get to the measure, intended to boost the public-private exchange of data on hackers. “I’m sad to say I don’t think that’s going to happen,” he told reporters off the Senate floor. “The timing of this is unfortunate.” “I think we’re just running out of time,” he added. An aide for Senate Majority Leader Mitch McConnell (R-Ky.) said he had not committed to a specific schedule after the upper chamber wraps up work in the coming days on a highway funding bill. Cornyn said Senate leadership will look to move on the bill sometime after the legislature returns in September from its month-long break.
  • The move would delay yet again what’s expected to be a bruising floor fight about government surveillance and digital privacy rights. “[CISA] needs a lot of work,” Sen. Patrick Leahy (D-Vt.), who currently opposes the bill, told The Hill on Tuesday. “And when it comes up, there’s going to have to be a lot of amendments otherwise it won’t pass.” Despite industry support, broad bipartisan backing, and potentially even White House support, CISA has been mired in the Senate for months over privacy concerns. Civil liberties advocates worry the bill would create another venue for the government’s intelligence wing to collect sensitive data on Americans only months after Congress voted to rein in surveillance powers. But industry groups and many lawmakers insist a bolstered data exchange is necessary to better understand and counter the growing cyber threat. Inaction will leave government and commercial networks exposed to increasingly dangerous hackers, they say. Sen. Ron Wyden (D-Ore.), who has been leading the chorus opposing the bill, rejoiced Tuesday after hearing of the likely delay.
  • “I really want to commend the advocates for the tremendous grassroots effort to highlight the fact that this bill was badly flawed from a privacy standpoint,” he told The Hill. Digital rights and privacy groups are blanketing senators’ offices this week with faxes and letters in an attempt to raise awareness of the bill’s flaws. “Our side has picked up an enormous amount of support,” Wyden said. Wyden was the only senator to vote against CISA in the Senate Intelligence Committee. The panel approved the measure in March by a 14-1 vote and it looked like CISA was barrelling toward the Senate floor. After the House easily passed its companion pieces of legislation, CISA’s odds only seemed better. But the measure got tied up in the vicious debate over the National Security Agency's (NSA) spying powers that played out throughout April and May. “It’s like a number of these issues, in the committee the vote was 14-1, everyone says, ‘oh, Ron Wyden opposes another bipartisan bill,’” Wyden said Tuesday. “And I said, ‘People are going to see that this is a badly flawed bill.’”
  • ...2 more annotations...
  • CISA backers hoped that the ultimate vote to curb the NSA’s surveillance authority might quell some of the privacy fears surrounding CISA, clearing a path to passage. But numerous budget debates and the Iranian nuclear deal have chewed up much of the Senate’s floor time throughout June and July. Following the devastating hacks at the Office of Personnel Management (OPM), Senate Republican leaders tried to jump CISA in the congressional queue by offering its language as an amendment to a defense authorization bill. Democrats — including the bill’s original co-sponsor Sen. Dianne Feinstein (D-Calif.) — revolted, angry they could not offer amendments to CISA’s language before it was attached to the defense bill. Cornyn on Tuesday chastised Democrats for stalling a bill that many of them favor. “As you know, Senate Democrats blocked that before on the defense authorization bill,” Cornyn said. “So we had an opportunity to do it then.” Now it’s unclear when the Senate will have another opportunity. When it does, however, CISA could have the votes to get through.
  • There will be vocal opposition from senators like Wyden and Leahy, and potentially from anti-surveillance advocates like Sens. Rand Paul (R-Ky.), Mike Lee (R-Utah) and Dean Heller (R-Nev.). But finding 40 votes to block the bill completely will be a difficult task. Wyden said he wouldn’t “get into speculation” about whether he could gather the support to stop CISA altogether. “I’m pleased about the progress that we’ve made,” he said.
  •  
    NSA and crew decide to delay and try later with CISA. The Internet strikes back again.
Gonzalo San Gil, PhD.

This Is WAR: Spotify Tells Subscribers Not to Pay Apple's 30% Cut... - Digital Music Ne... - 0 views

  •  
    "Want to enjoy Spotify on your iPhone, a platform that Apple built? Then you have to go through the App Store, where Apple takes 30 percent of the monthly subscription price. That is, unless you go around Apple and its terms of service regarding subscriptions. Here's an email that Spotify just sent to subscribers, telling them how to circumvent that and the extra charge Spotify added on to pay the 30% cut."
Paul Merrell

Expert System Guns for Google with Semantic Search, Advertising - 0 views

  • Would-be Google AdSense rival Expert System launches its Cogito Semantic Advertiser tool, which discerns the meaning and context of words to provide more relevant ads. By leveraging semantic technologies, Expert System joins a cadre of search providers that includes Microsoft-owned Powerset, Hakia, Yedda and Zoomix.
  • The problem is that AdSense relies on keyword frequency but doesn't drill down into the semantics—the meaning in the words. Cogito Semantic Advertiser attempts to go further by using semantic intelligence to analyze the text on each page and ensure that ads are placed appropriately to increase click-through rates.
  • Asher told me Cogito Semantic Advertiser understands content based on four key methodologies: studying the morphology of words; looking at parts of speech; sentence logic, or the reduction of sentences to subject, verb and object; and disambiguation, which in the case of the jaguar story paired with the Jaguar car ad would determine whether or not the text referred to a car or an animal.
Gonzalo San Gil, PhD.

Online Ad Spend Surpasses Newspapers - eMarketer - 0 views

  •  
    *This is The Modern Change of Media and Marketing Paradigms, something that other Businesses (The Audiovisual and The Music Ones, for example) refuse to understand... [ 2010 will mark the first time marketers put more money into online advertising than newspapers, eMarketer estimates. Total newspaper spending, including advertising in print and online editions, will fall to $25.7 billion in 2010, a decline of 6.6%. Spending on print newspapers alone will fall more steeply to $22.8 billion. Meanwhile, a rise of 13.9% will push US online ad spending up to $25.8 billion by year's end. ... ]
Gonzalo San Gil, PhD.

Does your phone company track you? | Ars Technica - 0 views

  •  
    "Twitter's mobile ad arm lets clients use a hidden tracking number created by Verizon. by Julia Angwin and Jeff Larson, ProPublica Nov 3 2014, 9:30pm CET" [# ! Of course, ... # ! ... '#They' do. # ! Too. # ! #we are no #more than #salable #products... for Them... # ! ... and a 'Potential #Threat' for 'The Others'. # ! Welcome to 'The (#Dirty) #Market #Rules' #Era. # ! #NoFear, just take #Your #measures...]
Gonzalo San Gil, PhD.

That Study In Every Paper Claiming Title II Will Result In $15 Billion In New Taxes? Ye... - 1 views

  •  
    "As also noted, the claim was quickly picked up by media outlets and large ISP executives as the centerpiece of a campaign to convince the press, public and regulators that Title II will result in the sky falling. The cable industry was also quick to use the study as the underpinning of a series of ads pretending they care about soaring consumer bills (adorabl"