
Future of the Web / Group items tagged: book



An Important Kindle request

  • A Message from the Amazon Books Team Dear Readers, Just ahead of World War II, there was a radical invention that shook the foundations of book publishing. It was the paperback book. This was a time when movie tickets cost 10 or 20 cents, and books cost $2.50. The new paperback cost 25 cents — it was ten times cheaper. Readers loved the paperback and millions of copies were sold in just the first year. With it being so inexpensive and with so many more people able to afford to buy and read books, you would think the literary establishment of the day would have celebrated the invention of the paperback, yes? Nope. Instead, they dug in and circled the wagons. They believed low cost paperbacks would destroy literary culture and harm the industry (not to mention their own bank accounts). Many bookstores refused to stock them, and the early paperback publishers had to use unconventional methods of distribution — places like newsstands and drugstores. The famous author George Orwell came out publicly and said about the new paperback format, if "publishers had any sense, they would combine against them and suppress them." Yes, George Orwell was suggesting collusion. Well… history doesn't repeat itself, but it does rhyme.
  • Fast forward to today, and it's the e-book's turn to be opposed by the literary establishment. Amazon and Hachette — a big US publisher and part of a $10 billion media conglomerate — are in the middle of a business dispute about e-books. We want lower e-book prices. Hachette does not. Many e-books are being released at $14.99 and even $19.99. That is unjustifiably high for an e-book. With an e-book, there's no printing, no over-printing, no need to forecast, no returns, no lost sales due to out of stock, no warehousing costs, no transportation costs, and there is no secondary market — e-books cannot be resold as used books. E-books can and should be less expensive. Perhaps channeling Orwell's decades old suggestion, Hachette has already been caught illegally colluding with its competitors to raise e-book prices. So far those parties have paid $166 million in penalties and restitution. Colluding with its competitors to raise prices wasn't only illegal, it was also highly disrespectful to Hachette's readers. The fact is many established incumbents in the industry have taken the position that lower e-book prices will "devalue books" and hurt "Arts and Letters." They're wrong. Just as paperbacks did not destroy book culture despite being ten times cheaper, neither will e-books. On the contrary, paperbacks ended up rejuvenating the book industry and making it stronger. The same will happen with e-books.

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
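  • To make that concrete, here is a minimal sketch of class-based extension, with class names invented purely for illustration rather than taken from any published microformat:

        <div class="chapter">
          <p class="epigraph">An opening quotation would go here.</p>
          <p class="summary">A one-paragraph chapter summary.</p>
          <p>Ordinary body paragraphs need no class at all.</p>
        </div>

    To a browser this is just XHTML to be styled with CSS; to a downstream transformation script, class="epigraph" is as easy to match as a dedicated DocBook element would be.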
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
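  • Unzipping a typical ePub makes that anatomy concrete. File names vary from producer to producer, but the general shape sketched below (a mimetype marker, a container pointer, a package file, and XHTML content; the folder and chapter names here are invented) is what the standard expects:

        mimetype                    the literal string "application/epub+zip"
        META-INF/container.xml      points to the package (.opf) file
        OEBPS/content.opf           metadata, manifest, and reading order
        OEBPS/toc.ncx               navigation / table of contents
        OEBPS/chapter01.xhtml       the book content itself, as XHTML
        OEBPS/styles.css            presentation

    Everything a reading system needs to know about the book lives in the small declaration files; the text itself is ordinary XHTML.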
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that produces a nice, familiar InDesign document, one that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
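  • A representative before-and-after for that kind of clean-up; the class name is an assumption about what a presentation-oriented export might emit, not a quotation of InDesign's actual output:

        <!-- export-style markup: a list item disguised as a paragraph -->
        <p class="bullet-list-item">Prefer free software over proprietary systems.</p>

        <!-- structural XHTML after clean-up -->
        <ul>
          <li>Prefer free software over proprietary systems.</li>
        </ul>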
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
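  • The article does not reproduce the script, but a heavily simplified sketch of the approach might look like the fragment below. The ICML element names (ParagraphStyleRange, CharacterStyleRange, Content) follow Adobe's documented ICML vocabulary, while the style names, the Story attributes, and the single-template structure are illustrative assumptions; a real script also needs templates for headings, lists, and inline markup, plus the ICML processing instruction InDesign expects.

        <!-- Invoked with something like: xsltproc xhtml2icml.xsl chapter01.xhtml > chapter01.icml -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">

          <!-- Wrap the whole document in a single ICML story -->
          <xsl:template match="/">
            <Document DOMVersion="7.0">
              <Story Self="story_main">
                <xsl:apply-templates/>
              </Story>
            </Document>
          </xsl:template>

          <!-- Map each XHTML paragraph to a paragraph range tagged with a named InDesign style -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
                <Content><xsl:value-of select="."/></Content>
                <Br/>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>

          <!-- Drop text that falls outside the elements handled above -->
          <xsl:template match="text()"/>
        </xsl:stylesheet>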
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to

The best way to read Glenn Greenwald's 'No Place to Hide'

  • Journalist Glenn Greenwald just dropped a pile of new secret National Security Agency documents onto the Internet. But this isn’t just some haphazard WikiLeaks-style dump. These documents, leaked to Greenwald last year by former NSA contractor Edward Snowden, are key supplemental reading material for his new book, No Place to Hide, which went on sale Tuesday. Now, you could just go buy the book in hardcover and read it like you would any other nonfiction tome. Thanks to all the additional source material, however, if any work should be read on an e-reader or computer, this is it. Here are all the links and instructions for getting the most out of No Place to Hide.
  • Greenwald has released two versions of the accompanying NSA docs: a compressed version and an uncompressed version. The only difference between these two is the quality of the PDFs. The uncompressed version clocks in at over 91MB, while the compressed version is just under 13MB. For simple reading purposes, just go with the compressed version and save yourself some storage space. Greenwald also released additional “notes” for the book, which are just citations. Unless you’re doing some scholarly research, you can skip this download.
  • No Place to Hide is, of course, available on a wide variety of ebook formats—all of which are a few dollars cheaper than the hardcover version, I might add. Pick your e-poison: Amazon, Nook, Kobo, iBooks. Flipping back and forth Each page of the documents includes a corresponding page number for the book, to allow readers to easily flip between the book text and the supporting documents. If you use the Amazon Kindle version, you also have the option of reading Greenwald’s book directly on your computer using the Kindle for PC app or directly in your browser. Yes, that may be the worst way to read a book. In this case, however, it may be the easiest way to flip back and forth between the book text and the notes and supporting documents. Of course, you can do the same on your e-reader—though it can be a bit of a pain. Those of you who own a tablet are in luck, as they provide the best way to read both ebooks and PDF files. Simply download the book using the e-reader app of your choice, download the PDFs from Greenwald’s website, and dig in. If you own a Kindle, Nook, or other ereader, you may have to convert the PDFs into a format that works well with your device. The Internet is full of tools and how-to guides for how to do this. Here’s one:
  • ...1 more annotation...
  • Kindle users also have the option of using Amazon’s Whispernet service, which converts PDFs into a format that functions best on the company’s e-reader. That will cost you a small fee, however—$0.15 per megabyte, which means the compressed Greenwald docs will cost you a whopping $1.95.

Best Comic Book Readers - Icecream Tech Digest

  •  
    Comic books are a widely loved part of modern entertainment culture and are no less valuable than movies or books. For example, San Diego Comic-Con International is attended by hundreds of thousands of people every year. Those comic book fans, …

Why Kindle Should Be An Open Book - Tim O'Reilly at Forbes.com

  •  
    Like someone finding out that the rapture has happened, and they've been left behind, Tim O'Reilly shakes his fists and shouts to the heavens that Amazon must support open standards. He argues that in spite of incredible market success, the Amazon Kindle will fail because the document format is not Open. He even argues that Apple, with the iPod and iPhone, has figured out how to blend Open Web formats and application development with proprietary hardware initiatives....

    "The Amazon Kindle has sparked huge media interest in e-books and has seemingly jump-started the market. Its instant wireless access to hundreds of thousands of e-books and seamless one-click purchasing process would seem to give it an enormous edge over other dedicated e-book platforms. Yet I have a bold prediction: Unless Amazon embraces open e-book standards like epub, which allow readers to read books on a variety of devices, the Kindle will be gone within two or three years."

    O'Reilly points to ePub as an open format, apparently not realizing the format falls far short of Open Web advances designed to enable a complete publication-typesetter model. The WebKit and Mozilla open source communities are pushing the envelope of Open Web development with an extremely advanced document model based on HTML5, CSS3, SVG/Canvas, and JavaScript4+. ePub, on the other hand, is stuck in 1998, supporting the aging HTML4 - CSS2.1 specs. Very sad.


Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea...

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms-Android, iOS, OS/X, and Windows- and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the

Google book-scanning project legal, says U.S. appeals court | Reuters

  • A U.S. appeals court ruled on Friday that Google's massive effort to scan millions of books for an online library does not violate copyright law, rejecting claims from a group of authors that the project illegally deprives them of revenue. The 2nd U.S. Circuit Court of Appeals in New York rejected infringement claims from the Authors Guild and several individual writers, and found that the project provides a public service without violating intellectual property law.
  • Google argued that the effort would actually boost book sales by making it easier for readers to find works, while introducing them to books they might not otherwise have seen. A lawyer for the authors did not immediately respond to a request for comment. Google had said it could face billions of dollars in potential damages if the authors prevailed. Circuit Judge Denny Chin, who oversaw the case at the lower court level, dismissed the litigation in 2013, prompting the authors' appeal. Chin found Google's scanning of tens of millions of books and posting "snippets" online constituted "fair use" under U.S. copyright law. A unanimous three-judge appeals panel said the case "tests the boundaries of fair use," but found Google's practices were ultimately allowed under the law. "Google’s division of the page into tiny snippets is designed to show the searcher just enough context surrounding the searched term to help her evaluate whether the book falls within the scope of her interest (without revealing so much as to threaten the author’s copyright interests)," Circuit Judge Pierre Leval wrote for the court.
  • The 2nd Circuit had previously rejected a similar lawsuit from the Authors Guild in June 2014 against a consortium of universities and research libraries that built a searchable online database of millions of scanned works. The case is Authors Guild v. Google Inc, 2nd U.S. Circuit Court of Appeals, No. 13-4829.

2nd Cir. Affirms That Creation of Full-Text Searchable Database of Works Is Fair Use | ...

  • The fair use doctrine permits the unauthorized digitization of copyrighted works in order to create a full-text searchable database, the U.S. Court of Appeals for the Second Circuit ruled June 10. Affirming summary judgment in favor of a consortium of university libraries, the court also ruled that the fair use doctrine permits the unauthorized conversion of those works into accessible formats for use by persons with disabilities, such as the blind.
  • The dispute is connected to the long-running conflict between Google Inc. and various authors of books that Google included in a mass digitization program. In 2004, Google began soliciting the participation of publishers in its Google Print for Publishers service, part of what was then called the Google Print project, aimed at making information available for free over the Internet. Subsequently, Google announced a new project, Google Print for Libraries. In 2005, Google Print was renamed Google Book Search and it is now known simply as Google Books. Under this program, Google made arrangements with several of the world's largest libraries to digitize the entire contents of their collections to create an online full-text searchable database. The announcement of this program triggered a copyright infringement action by the Authors Guild that continues to this day.
  • Part of the deal between Google and the libraries included an offer by Google to hand over to the libraries their own copies of the digitized versions of their collections. In 2011, a group of those libraries announced the establishment of a new service, called the HathiTrust digital library, to which the libraries would contribute their digitized collections. This database of copies is to be made available for full-text searching and preservation activities. Additionally, it is intended to offer free access to works to individuals who have “print disabilities.” For works under copyright protection, the search function would return only a list of page numbers that a search term appeared on and the frequency of such appearance.
  • ...3 more annotations...
  • Turning to the fair use question, the court first concluded that the full-text search function of the Hathitrust Digital Library was a “quintessentially transformative use,” and thus constituted fair use. The court said: the result of a word search is different in purpose, character, expression, meaning, and message from the page (and the book) from which it is drawn. Indeed, we can discern little or no resemblance between the original text and the results of the HDL full-text search. There is no evidence that the Authors write with the purpose of enabling text searches of their books. Consequently, the full-text search function does not “supersede[ ] the objects [or purposes] of the original creation.” Turning to the fourth fair use factor—whether the use functions as a substitute for the original work—the court rejected the argument that such use represents lost sales to the extent that it prevents the future development of a market for licensing copies of works to be used in full-text searches. However, the court emphasized that the search function “does not serve as a substitute for the books that are being searched.”
  • The court also rejected the argument that the database represented a threat of a security breach that could result in the full text of all the books becoming available for anyone to access. The court concluded that Hathitrust's assertions of its security measures were unrebutted. Thus, the full-text search function was found to be protected as fair use.
  • The court also concluded that allowing those with print disabilities access to the full texts of the works collected in the Hathitrust database was protected as fair use. Support for this conclusion came from the legislative history of the Copyright Act's fair use provision, 17 U.S.C. §107.

Joint - Dear Colleague Letter: Electronic Book Readers

  • U.S. Department of Justice Civil Rights Division U.S. Department of Education Office for Civil Rights
  •  
    June 29, 2010 Dear College or University President: We write to express concern on the part of the Department of Justice and the Department of Education that colleges and universities are using electronic book readers that are not accessible to students who are blind or have low vision and to seek your help in ensuring that this emerging technology is used in classroom settings in a manner that is permissible under federal law. A serious problem with some of these devices is that they lack an accessible text-to-speech function. Requiring use of an emerging technology in a classroom environment when the technology is inaccessible to an entire population of individuals with disabilities - individuals with visual disabilities - is discrimination prohibited by the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504) unless those individuals are provided accommodations or modifications that permit them to receive all the educational benefits provided by the technology in an equally effective and equally integrated manner. ... The Department of Justice recently entered into settlement agreements with colleges and universities that used the Kindle DX, an inaccessible, electronic book reader, in the classroom as part of a pilot study with Amazon.com, Inc. In summary, the universities agreed not to purchase, require, or recommend use of the Kindle DX, or any other dedicated electronic book reader, unless or until the device is fully accessible to individuals who are blind or have low vision, or the universities provide reasonable accommodation or modification so that a student can acquire the same information, engage in the same interactions, and enjoy the same services as sighted students with substantially equivalent ease of use. The texts of these agreements may be viewed on the Department of Justice's ADA Web site, www.ada.gov. (To find these settlements on www.ada.gov, search for "Kindle.") Consisten

International Digital Publishing Forum (formerly Open eBook Forum)

shared by Paul Merrell on 29 May 08
  • EPUB Support from list of Publishers An Open Letter from AAP to IDPF
  • What is EPUB, .epub, OPS/OCF & OEB? ".epub" is the file extension of an XML format for reflowable digital books and publications. ".epub" is composed of three open standards, the Open Publication Structure (OPS), Open Packaging Format (OPF) and Open Container Format (OCF), produced by the IDPF. "EPUB" allows publishers to produce and send a single digital publication file through distribution and offers consumers interoperability between software/hardware for unencrypted reflowable digital books and other publications. The Open eBook Publication Structure or "OEB", originally produced in 1999, is the precursor to OPS. For the latest on IDPF standards, sample files and companies who have implemented our specifications, please visit our public forums.  Getting started? Visit our FAQ's.
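  • As a concrete illustration of the OCF piece: the container file inside every .epub simply points a reading system at the package document. A minimal example follows (the OEBPS path is a common convention rather than a requirement):

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>

    The OPF file it points to carries the metadata and manifest (the OPS piece), and the content that manifest lists is the reflowable XHTML described above.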
  •  
    Will ePub be the standard that converges the desktop, the server, devices, and the Web? ePub is an implementation of the W3C Compound Document Formats interoperability framework with excellent packaging, container, and markup components. ePub is also strongly integrated with Daisy XML for accessibility, "talking books," and document structure, hinting at a voice-interactive future for publishing. ePub has been developed as a vendor-neutral standard and is being implemented by a large number of major book publishers globally, a factor that should spur major development of both editing and rendering software and devices.

Publishers Are Lining Up Behind 'Netflix for Books' Services. But Why? | WIRED [# Note ...

  •  
    [# ! we are 'The Product'. Don't we deserve a 'commission' (like free sharing, for example...)] "The money may not be the real reason publishers are coming around, however. The greatest value of these "Netflix for books" services could be that these startups share valuable reader data, says James McQuivey, an analyst with Forrester Research."

Internet Archive: Scanning Services

  • Digitizing Print Collections with the Internet Archive Open and free online access, permanent storage, unlimited downloads and lifetime file management. We can help digitize your collections in 4 simple steps:
  • In addition to permanent hosting on archive.org, your books will be integrated with Open Library, openlibrary.org, a page on the web for every book.
  • Non-destructive color scanning using our Scribe system at one of our scanning centers across the globe. Complete MARC records, Dublin Core & XML, just 10c USD per image and a small set up charge per item.
  • ...4 more annotations...
  • Create and upload high-quality JPEG 2000 images; persistent identifiers, lifetime hosting of files, lifetime management of file system and file access.
  • Create high-quality PDF/A files; run OCR across texts to allow "search inside" all books. Add to Internet Archive search engine; display via our open source Book Reader
  • 2,000,000 books online; 600 million pages scanned; 1,500 books scanned each day; 15 million downloads each month; 33 scanning centers in 7 countries; 3.5 petabytes of storage; 8 Gb per second bandwidth
  • Library of Congress, Harvard University, The New York Public Library, Smithsonian Institution, The Getty Research Institute, University of California, University of Toronto, Biodiversity Heritage Library, Boston Library Consortium, C.A.R.L.I., Johns Hopkins University, Allen County Public Library, Lyrasis, Massachusetts Institute of Technology, State Library of Massachusetts . . . and over 1,000 other Open Content Alliance partners
  •  
    I've been looking for a permanent online home for a couple of historical works I co-authored. My guiding criterion has been the best chance of the works' long-term survival in a publicly-accessible form after my death. I think I may have just found my solution. 

Tell Congress: My Phone Calls are My Business. Reform the NSA. | EFF Action Center

  • The USA PATRIOT Act granted the government powerful new spying capabilities that have grown out of control—but the provision that the FBI and NSA have been using to collect the phone records of millions of innocent people expires on June 1. Tell Congress: it’s time to rethink out-of-control spying. A vote to reauthorize Section 215 is a vote against the Constitution.
  • On June 5, 2013, the Guardian published a secret court order showing that the NSA has interpreted Section 215 to mean that, with the help of the FBI, it can collect the private calling records of millions of innocent people. The government could even try to use Section 215 for bulk collection of financial records. The NSA’s defenders argue that invading our privacy is the only way to keep us safe. But the White House itself, along with the President’s Review Board has said that the government can accomplish its goals without bulk telephone records collection. And the Privacy and Civil Liberties Oversight Board said, “We have not identified a single instance involving a threat to the United States in which [bulk collection under Section 215 of the PATRIOT Act] made a concrete difference in the outcome of a counterterrorism investigation.” Since June of 2013, we’ve continued to learn more about how out of control the NSA is. But what has not happened since June is legislative reform of the NSA. There have been myriad bipartisan proposals in Congress—some authentic and some not—but lawmakers didn’t pass anything. We need comprehensive reform that addresses all the ways the NSA has overstepped its authority and provides the NSA with appropriate and constitutional tools to keep America safe. In the meantime, tell Congress to take a stand. A vote against reauthorization of Section 215 is a vote for the Constitution.
  •  
    EFF has launched an email campaign to press members of Congress not to renew section 215 of the Patriot Act when it expires on June 1, 2015.   Section 215 authorizes FBI officials to "make an application for an order requiring the production of *any tangible things* (including books, records, papers, documents, and other items) for an investigation to obtain foreign intelligence information not concerning a United States person or to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution." http://www.law.cornell.edu/uscode/text/50/1861 The section has been abused to obtain bulk collection of all telephone records for the NSA's storage and processing. But the section goes farther and lists as specific examples of records that can be obtained under section 215's authority, "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records."  Think of the NSA's voracious appetite for new "haystacks" it can store and search in its gigantic new data center in Utah. Then ask yourself, "do I want the NSA to obtain all of my personal data, store it, and search it at will?" If your answer is "no," you might consider visiting this page to send your Congress critters an email urging them to vote against renewal of section 215 and to vote for other NSA reforms listed in the EFF sample email text. Please do not procrastinate. Do it now, before you forget. Every voice counts. 

Tech firms and privacy groups press for curbs on NSA surveillance powers - The Washingt...

  • The nation’s top technology firms and a coalition of privacy groups are urging Congress to place curbs on government surveillance in the face of a fast-approaching deadline for legislative action. A set of key Patriot Act surveillance authorities expire June 1, but the effective date is May 21 — the last day before Congress breaks for a Memorial Day recess. In a letter to be sent Wednesday to the Obama administration and senior lawmakers, the coalition vowed to oppose any legislation that, among other things, does not ban the “bulk collection” of Americans’ phone records and other data.
  • “We know that there are some in Congress who think that they can get away with reauthorizing the expiring provisions of the Patriot Act without any reforms at all,” said Kevin Bankston, policy director of New America Foundation’s Open Technology Institute, a privacy group that organized the effort. “This letter draws a line in the sand that makes clear that the privacy community and the Internet industry do not intend to let that happen without a fight.” At issue is the bulk collection of Americans’ data by intelligence agencies such as the National Security Agency. The NSA’s daily gathering of millions of records logging phone call times, lengths and other “metadata” stirred controversy when it was revealed in June 2013 by former NSA contractor Edward Snowden. The records are placed in a database that can, with a judge’s permission, be searched for links to foreign terrorists. They do not include the content of conversations.
  • That program, placed under federal surveillance court oversight in 2006, was authorized by the court in secret under Section 215 of the Patriot Act — one of the expiring provisions. The public outcry that ensued after the program was disclosed forced President Obama in January 2014 to call for an end to the NSA’s storage of the data. He also appealed to Congress to find a way to preserve the agency’s access to the data for counterterrorism information.
  • ...3 more annotations...
  • Despite growing opposition in some quarters to ending the NSA’s program, a “clean” authorization — one that would enable its continuation without any changes — is unlikely, lawmakers from both parties say. Sen. Ron Wyden (D-Ore.), a leading opponent of the NSA’s program in its current format, said he would be “surprised if there are 60 votes” in the Senate for that. In the House, where there is bipartisan support for reining in surveillance, it’s a longer shot still. “It’s a toxic vote back in your district to reauthorize the Patriot Act, if you don’t get some reforms” with it, said Rep. Thomas Massie (R-Ky.). The House last fall passed the USA Freedom Act, which would have ended the NSA program, but the Senate failed to advance its own version. The House and Senate judiciary committees are working to come up with new bipartisan legislation to be introduced soon.
  • The tech firms and privacy groups’ demands are a baseline, they say. Besides ending bulk collection, they want companies to have the right to be more transparent in reporting on national security requests and greater declassification of opinions by the Foreign Intelligence Surveillance Court.
  • Some legal experts have pointed to a little-noticed clause in the Patriot Act that would appear to allow bulk collection to continue even if the authority is not renewed. Administration officials have conceded privately that a legal case probably could be made for that, but politically it would be a tough sell. On Tuesday, a White House spokesman indicated the administration would not seek to exploit that clause. “If Section 215 sunsets, we will not continue the bulk telephony metadata program,” National Security Council spokesman Edward Price said in a statement first reported by Reuters. Price added that allowing Section 215 to expire would result in the loss of a “critical national security tool” used in investigations that do not involve the bulk collection of data. “That is why we have underscored the imperative of Congressional action in the coming weeks, and we welcome the opportunity to work with lawmakers on such legislation,” he said.
  •  
    I omitted some stuff about opposition to sunsetting the provisions. They seem to forget, as does Obama, that the proponents of the FISA Court's expansive reading of section 215 have not yet come up with a single instance where 215-derived data caught a single terrorist or prevented a single act of terrorism. Which means that if that data is of some use, it ain't in fighting terrorism, the purpose of the section. Patriot Act § 215 is codified as 50 USCS § 1861, https://www.law.cornell.edu/uscode/text/50/1861 That section authorizes the FBI to obtain an order from the FISA Court "requiring the production of *any tangible things* (including books, records, papers, documents, and other items)." Specific examples (a non-exclusive list) include the production of "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The Court can order that the recipient of the order tell no one of its receipt of the order or its response to it. In other words, this is about way more than your telephone metadata. Do you trust the NSA with your medical records? 
2More

Best ePub Readers for Windows - Icecream Tech Digest - 0 views

  •  
    One of the best ways of spending a winter evening is with a great book. Right away we imagine super cozy pictures of a person being in a comfortable position on a sofa under a warm blanket with a book in hand. However, the number of people that imag…
1More

MediaFuturist: A gift for you: free PDFs of my last 3 books: Music 2.0, The End of Cont... - 2 views

  •  
    [I want to start 2011 in a renewed spirit of generosity and sharing, so here are the complete PDFs of my last 3 books, for free; provided under a Creative Commons, non-commercial, share-alike, attribution license (see below). If you still want to buy the dead-tree versions of these books (or donate something for the free PDFs - yes, that's an option, too;), you can visit my Lulu Store, or go to Amazon.com, or check out my 'Paying for Gerd' page. You can also return the favor by blogging, tweeting, or Facebook-liking my stuff. Thanks, and enjoy, and have a great 2011. Update: my free videos (50+ keynotes and presentations) are here, the iTunes podcast feed is here (just subscribe to download all videos to your iPod / iPad / iPhone, or computers), and my free slideshows (90+) are here, on Slideshare :)]
2More

Free Linux Essentials E-Book | Linux Training - LPI Certified Training & Exams - 0 views

  •  
    "Hot on the heels of the recently release Linux Essentials exam and training courses, a free ebook has been released to the community.under a creative commons license! The book covers the objectives of the LPI LES exam. a vendor neutral, measure of foundation knowledge in Linux and Open Source."
1More

A List Apart: Articles: Printing a Book with CSS: Boom! - 0 views

  •  
    HTML is the dominant document format on the web and CSS is used to style most HTML pages. But, are they suitable for off-screen use? Can CSS be used for serious print jobs? To find out, we decided to take the ultimate challenge: to produce the next edition of our book directly from HTML and CSS files. In this article we sketch our solution and quote from the style sheet used. Towards the end we describe the book microformat (boom!) we developed in the process.
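    For readers curious what "CSS for print" looks like in practice, here is a minimal, hypothetical sketch of a paged-media stylesheet in the spirit the article describes. The trim size, named string, and selectors are illustrative assumptions, not taken from the authors' actual "boom" stylesheet, and features such as @page margin boxes, string-set, and page counters are generally honored by dedicated HTML-to-PDF formatters rather than by ordinary browsers:

      /* Illustrative print stylesheet; all values are assumptions, not the book's real styles */
      @page {
        size: 7in 9.25in;                               /* hypothetical trim size */
        margin: 27mm 16mm;
        @bottom-center { content: counter(page); }      /* page numbers in the footer */
        @top-center    { content: string(doctitle); }   /* running header from a named string */
      }
      html { string-set: doctitle "Printing a Book with CSS"; }
      h1   { page-break-before: always; }               /* each chapter starts on a new page */
      a::after { content: " [" attr(href) "]"; }        /* expose link targets on paper */

    In the article's actual workflow, the HTML additionally carried the lightweight "boom" microformat of class names to mark book parts such as chapters and the table of contents, which the stylesheet then targeted.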
1More

Hate Amazon? 6 alternatives for buying books, electronics and more | ITworld - 0 views

  •  
    "Here's where to buy if you want to avoid the Web's biggest bully By Preston Gralla May 28, 2014, 11:35 AM - The Amazon bully just got nastier, with Amazon refusing to take orders for upcoming books from publisher Hachette for authors including J.K. Rowling, Tina Fey, and others, because Hachette won't agree to Amazon's lowball pricing demands. Tired of buying from the world's nastiest Web site? There are plenty of other alternatives --- here are six of my favorites."
2More

PATRIOT Act spying programs on death watch - Seung Min Kim and Kate Tummarello - POLITICO - 0 views

  • With only days left to act and Rand Paul threatening a filibuster, Senate Republicans remain deeply divided over the future of the PATRIOT Act and have no clear path to keep key government spying authorities from expiring at the end of the month. Crucial parts of the PATRIOT Act, including a provision authorizing the government’s controversial bulk collection of American phone records, first revealed by Edward Snowden, are due to lapse May 31. That means Congress has barely a week to figure out a fix before lawmakers leave town for Memorial Day recess at the end of the next week. The prospects of a deal look grim: Senate Majority Leader Mitch McConnell on Thursday night proposed just a two-month extension of expiring PATRIOT Act provisions to give the two sides more time to negotiate, but even that was immediately dismissed by critics of the program.
  •  
    A must-read. The major danger is that the Senate could pass the USA Freedom Act, which has already been passed by the House. Passage of that Act, despite its name, would be bad news for civil liberties. Now is the time to let your Congress critters know that you want them to fight to let the Patriot Act provisions expire on May 31, without any replacement legislation. Keep in mind that Section 215 does not apply just to telephone metadata. It authorizes the FBI to gather, without notice to its victims, "any tangible thing", specifically including as examples "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The breadth of the section is illustrated by telephone metadata not even being mentioned in the section. NSA going after your medical records sound far-fetched? Former NSA technical director William Binney says they're already doing it: "Binney alludes to even more extreme intelligence practices that are not yet public knowledge, including the collection of Americans' medical data, the collection and use of client-attorney conversations, and law enforcement agencies' "direct access," without oversight, to NSA databases." https://consortiumnews.com/2015/03/05/seeing-the-stasi-through-nsa-eyes/ So please, contact your Congress critters right now and tell them to sunset the Patriot Act NOW. This will be decided in the next few days, so the sooner you contact them, the better. 