
Future of the Web: group items tagged "while"

Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
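To make "clean, valid XHTML" concrete, here is a minimal sketch (not taken from the article) of an XHTML 1.0 Strict document; because every element is closed and namespaced, any generic XML tool can parse it exactly as it would any other XML document.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <title>Chapter 1</title>
  </head>
  <body>
    <h1>Chapter 1</h1>
    <p>Well-formed, valid XHTML is simultaneously a Web page and an XML document.</p>
  </body>
</html>
```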
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
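A hedged sketch of that class-based extensibility (the class names here are invented for illustration, not drawn from any published microformat): a publisher can mark an epigraph or a summary without leaving plain XHTML.

```xml
<!-- Ordinary XHTML; publisher-defined semantics are layered on via class -->
<div class="chapter">
  <p class="epigraph">An opening quotation, distinguishable by software.</p>
  <p class="summary">A one-paragraph summary of the chapter.</p>
  <p>Ordinary body paragraphs need no class attribute at all.</p>
</div>
```

A downstream stylesheet or transformation can then select on these class values much as it would on dedicated element names in a richer schema.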
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
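A rough sketch of that anatomy follows. The file names under OEBPS/ are illustrative assumptions, but the mimetype file, META-INF/container.xml, and an OPF package file are the standard component parts the article alludes to.

```xml
<!-- Unzipping book.epub reveals roughly this layout:
       mimetype                   (the literal string "application/epub+zip")
       META-INF/container.xml     (points to the package file)
       OEBPS/content.opf          (metadata, manifest, reading order)
       OEBPS/chapter01.xhtml ...  (the book's content, as XHTML)            -->

<!-- META-INF/container.xml -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```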
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
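That cleanup step looks roughly like the following (the class names are invented for illustration; the actual names in InDesign's export will differ):

```xml
<!-- As exported: presentation-oriented, a list faked with classed paragraphs -->
<p class="bullet-item">Prefer ubiquitous tools over specialized ones.</p>
<p class="bullet-item">Prefer free software where possible.</p>

<!-- After search-and-replace cleanup: structural XHTML any software can use -->
<ul>
  <li>Prefer ubiquitous tools over specialized ones.</li>
  <li>Prefer free software where possible.</li>
</ul>
```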
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
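The sketch below is not the authors' script[18]; it only illustrates the shape of such a transformation, and the ICML element and style names shown (ParagraphStyleRange, CharacterStyleRange, Content, "ParagraphStyle/Body") are recalled from the IDML/ICML documentation and should be verified against Adobe's specification before use. It could be run with any XSLT processor, such as xsltproc.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: map each XHTML paragraph onto an ICML paragraph range.
     Example invocation (assumed file names):
       xsltproc xhtml2icml.xsl chapter01.xhtml > chapter01.icml               -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <xsl:output method="xml" indent="yes"/>

  <xsl:template match="/">
    <Document>
      <Story>
        <xsl:apply-templates select="//xhtml:body/xhtml:p"/>
      </Story>
    </Document>
  </xsl:template>

  <!-- One XHTML <p> becomes one styled paragraph range; the style name is
       assumed to be defined in the InDesign template the file is placed into -->
  <xsl:template match="xhtml:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
```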
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
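As a sketch of what the generated wrapper's navigation looks like: an NCX table-of-contents file (ePub 2's navigation format) is essentially a small XML map from chapter titles to the XHTML chapter files. The identifiers and file names below are placeholders, not output copied from eCub.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ncx xmlns="http://www.daisy.org/z3986/2005/ncx/" version="2005-1">
  <head>
    <meta name="dtb:uid" content="urn:uuid:placeholder-book-id"/>
  </head>
  <docTitle><text>Book Publishing 1</text></docTitle>
  <navMap>
    <!-- One navPoint per chapter, pointing at its XHTML file -->
    <navPoint id="ch01" playOrder="1">
      <navLabel><text>Chapter 1</text></navLabel>
      <content src="chapter01.xhtml"/>
    </navPoint>
  </navMap>
</ncx>
```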
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” Those broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grows, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of  the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Gonzalo San Gil, PhD.

Problems and Strategies in Financing Voluntary Free Software Projects :: Benjamin Mako ... - 0 views

  •  
    "Benjamin Mako Hill mako@atdot.cc [... Abstract It's easier for a successful volunteer Free Software project to get money than it is to decide how to spend it. While paying developers is easy, it can carry unintended negative consequences. This essay explores problems and benefits of paying developers in volunteer free and open source projects and surveys strategies that projects have used to successfully finance development while maintaining their volunteer nature. ...] This is revision 0.2.1 of this file and was published on November 20, 2012. Revision 0.2 was published on June 10, 2005. Revision 0.1 was published on May 15, 2005 and was written was presented as a talk at Linuxtag 2005 given in Karlsruhe, Germany. Revision 0 was published on May 2004 is based in part of the research and work done for a presentation on the subject given at the International Free Software Forum (FISL) given in Porto Alegre, Brazil."
Paul Merrell

Bulk Collection Under Section 215 Has Ended… What's Next? | Just Security - 0 views

  • The first (and thus far only) roll-back of post-9/11 surveillance authorities was implemented over the weekend: The National Security Agency shuttered its program for collecting and holding the metadata of Americans’ phone calls under Section 215 of the Patriot Act. While bulk collection under Section 215 has ended, the government can obtain access to this information under the procedures specified in the USA Freedom Act. Indeed, some experts have argued that the Agency likely has access to more metadata because its earlier dragnet didn’t cover cell phones or Internet calling. In addition, the metadata of calls made by an individual in the United States to someone overseas and vice versa can still be collected in bulk — this takes place abroad under Executive Order 12333. No doubt the NSA wishes that this was the end of the surveillance reform story and the Paris attacks initially gave them an opening. John Brennan, the Director of the CIA, implied that the attacks were somehow related to “hand wringing” about spying and Sen. Tom Cotton (R-Ark.) introduced a bill to delay the shut down of the 215 program. Opponents of encryption were quick to say: “I told you so.”
  • But the facts that have emerged thus far tell a different story. It appears that much of the planning took place IRL (that’s “in real life” for those of you who don’t have teenagers). The attackers, several of whom were on law enforcement’s radar, communicated openly over the Internet. If France ever has a 9/11 Commission-type inquiry, it could well conclude that the Paris attacks were a failure of the intelligence agencies rather than a failure of intelligence authorities. Despite the passage of the USA Freedom Act, US surveillance authorities have remained largely intact. Section 702 of the FISA Amendments Act — which is the basis of programs like PRISM and the NSA’s Upstream collection of information from Internet cables — sunsets in the summer of 2017. While it’s difficult to predict the political environment that far out, meaningful reform of Section 702 faces significant obstacles. Unlike the Section 215 program, which was clearly aimed at Americans, Section 702 is supposedly targeted at foreigners and only picks up information about Americans “incidentally.” The NSA has refused to provide an estimate of how many Americans’ information it collects under Section 702, despite repeated requests from lawmakers and most recently a large cohort of advocates. The Section 215 program was held illegal by two federal courts (here and here), but civil attempts to challenge Section 702 have run into standing barriers. Finally, while two review panels concluded that the Section 215 program provided little counterterrorism benefit (here and here), they found that the Section 702 program had been useful.
  • There is, nonetheless, some pressure to narrow the reach of Section 702. The recent decision by the European Court of Justice in the safe harbor case suggests that data flows between Europe and the US may be restricted unless the PRISM program is modified to protect the information of Europeans (see here, here, and here for discussion of the decision and reform options). Pressure from Internet companies whose business is suffering — estimates run to the tune of $35 to 180 billion — as a result of disclosures about NSA spying may also nudge lawmakers towards reform. One of the courts currently considering criminal cases which rely on evidence derived from Section 702 surveillance may hold the program unconstitutional either on the basis of the Fourth Amendment or Article III for the reasons set out in this Brennan Center report. A federal district court in Colorado recently rejected such a challenge, although as explained in Steve’s post, the decision did not seriously explore the issues. Further litigation in the European courts too could have an impact on the debate.
  • ...2 more annotations...
  • The US intelligence community’s broadest surveillance authorities are enshrined in Executive Order 12333, which primarily covers the interception of electronic communications overseas. The Order authorizes the collection, retention, and dissemination of “foreign intelligence” information, which includes information “relating to the capabilities, intentions or activities of foreign powers, organizations or persons.” In other words, so long as they are operating outside the US, intelligence agencies are authorized to collect information about any foreign person — and, of course, any Americans with whom they communicate. The NSA has conceded that EO 12333 is the basis of most of its surveillance. While public information about these programs is limited, a few highlights give a sense of the breadth of EO 12333 operations: The NSA gathers information about every cell phone call made to, from, and within the Bahamas, Mexico, Kenya, the Philippines, and Afghanistan, and possibly other countries. A joint US-UK program tapped into the cables connecting internal Yahoo and Google networks to gather e-mail address books and contact lists from their customers. Another US-UK collaboration collected images from video chats among Yahoo users and possibly other webcam services. The NSA collects both the content and metadata of hundreds of millions of text messages from around the world. By tapping into the cables that connect global networks, the NSA has created a database of the location of hundreds of millions of mobile phones outside the US.
  • Given its scope, EO 12333 is clearly critical to those seeking serious surveillance reform. The path to reform is, however, less clear. There is no sunset provision that requires action by Congress and creates an opportunity for exposing privacy risks. Even in the unlikely event that Congress was inclined to intervene, it would have to address questions about the extent of its constitutional authority to regulate overseas surveillance. To the best of my knowledge, there is no litigation challenging EO 12333 and the government doesn’t give notice to criminal defendants when it uses evidence derived from surveillance under the order, so the likelihood of a court ruling is slim. The Privacy and Civil Liberties Oversight Board is currently reviewing two programs under EO 12333, but it is anticipated that much of its report will be classified (although it has promised a less detailed unclassified version as well). While the short-term outlook for additional surveillance reform is challenging, from a longer-term perspective, the distinctions that our law makes between Americans and non-Americans and between domestic and foreign collection cannot stand indefinitely. If the Fourth Amendment is to meaningfully protect Americans’ privacy, the courts and Congress must come to grips with this reality.
Paul Merrell

Internet users raise funds to buy lawmakers' browsing histories in protest | TheHill - 0 views

  • House passes bill undoing Obama internet privacy rule
  • “Great news! The House just voted to pass SJR34. We will finally be able to buy the browser history of all the Congresspeople who voted to sell our data and privacy without our consent!” he wrote on the fundraising page. Another activist from Tennessee has raised more than $152,000 from more than 9,800 people. A bill on its way to President Trump’s desk would allow internet service providers (ISPs) to sell users’ data and Web browsing history. It has not taken effect, which means there is no growing history data yet to purchase. A Washington Post reporter also wrote it would be possible to buy the data “in theory, but probably not in reality.” A former enforcement bureau chief at the Federal Communications Commission told the newspaper that most internet service providers would cover up this information, under their privacy policies. If they did sell any individual's personal data in violation of those policies, a state attorney general could take the ISPs to court.
Gary Edwards

Windows XP: How end of support sparked one organisation's shift from Microsoft | ZDNet - 1 views

  •  
    Good story of how a UK company responded to Microsoft's announcement of XP's end of life. After examining many alternatives, they settled on a ChromeBook-ChromeBox-Citrix solution. Most of the existing desktop hardware was repurposed as ChromeTops running Chrome Browser apps and Citrix XenDesktop for legacy data apps. excerpt/intro: "There are the XP diehards, and the Windows 7 and 8 migrators. But in a world facing up to the end of Windows XP support, one UK organisation belongs to another significant group - those breaking with Microsoft as their principal OS provider. Microsoft's end of routine security patching and software updates on 8 April helped push the London borough of Barking and Dagenham to a decision it might otherwise not have taken over the fate of its 3,500 Windows XP desktops and 800 laptops. "They were beginning to creak but they would have gone on for a while. It's fair to say if XP wasn't going out of life, we probably wouldn't be doing this now," Barking and Dagenham general manager IT Sheyne Lucock said. Around one-eighth of corporate Windows XP users are moving away from Microsoft, according to recent Tech Pro Research. Lucock said it had become clear that the local authority was locked into a regular Windows operating system refresh cycle that it could no longer afford. "If we just replaced all the Windows desktops with newer versions running a newer version of Windows, four years later we would have to do the same again and so on," he said. "So there was an inclination to try and do something different - especially as we know that with all the budget challenges that local government is going to be faced with, we're going to have to halve the cost of our ICT service over the next five years." Barking and Dagenham outsourced its IT in December 2010 to Elevate East London, which is a joint-venture between the council and services firm Agilisys. Lucock and systems architect Rupert Hay-Campbell are responsible for strategy, policy
  •  
    Meanwhile, some organizations missed the end of life deadline and are now paying Microsoft for extended support. E.g., the U.S. Internal Revenue Service, which is still running 58,000 desktops on WinXP. http://arstechnica.com/information-technology/2014/04/irs-another-windows-xp-laggard-will-pay-microsoft-for-patches/
Paul Merrell

New open-source router firmware opens your Wi-Fi network to strangers | Ars Technica - 0 views

  • We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question. Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our Wi-Fi private networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources. The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”
  • One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-sourced router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday. Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.
  • Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping. Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual. “From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.
  • ...1 more annotation...
  • While the EFF firmware will initially be compatible with only one specific router, the organization would like to eventually make it compatible with other routers and even, perhaps, develop its own router. “We noticed that router software, in general, is pretty insecure and inefficient," Kamdar said. “There are a few major players in the router space. Even though various flaws have been exposed, there have not been many fixes.”
Gonzalo San Gil, PhD.

The 9 Best Linux Distros (alphabetically) - Datamation - 0 views

  •  
    "The best Linux distro for you may not be the best Linux distro for another user. Many Linux users are distro-hoppers, regularly moving from distribution to distribution. Some may be looking for the perfect distro, while others are simply curious about the latest Linux developments. Which distributions should distro-hoppers be looking at? While that depends what they are looking for, those looking for reliable distros for everyday use should be looking at. Here are the best Linux distros, in alphabetical order. "
Gary Edwards

Does "A VC" have a blind spot for Apple? « counternotions on Flash, WebKit an... - 0 views

  •  
    Flash versus Open: Perhaps one thing we can all agree on is that the future of the web, mobile or otherwise, will be more or less open. That would be HTML, MP3, H.264, HE-AAC, and so on. These are not proprietary Adobe products; they are open standards…unlike Flash. In confusing codecs with UI, Wilson keeps asking, "why is it that most streaming audio and video on the web comes through flash players and not html5 based players?" The answer is rather pedestrian: HTML5 is just ramping up, but Flash IDE has been around for many years. Selling Flash IDE and back-end server tools has been a commercial focus for Adobe, while Apple, for example, hasn't paid much attention to QuickTime technologies and promotion in ages. It's thus reflected in adoption patterns. Hopefully, this summary will clear Wilson's blind spot: Apple is betting on open technologies (as it makes money on hardware) while Adobe (which only sells software) is betting on wrapping up content in a proprietary shackle called Flash.
Gary Edwards

Mashups turn into an industry as offerings mature | Hinchcliffe Enterprise Web 2.0 | Z... - 0 views

  •  
    Dion has lots to say about the recent Web 2.0 Conference. In this article he covers nine significant announcements from companies specializing in Web-based mashups and the related tools for building ad hoc Web applications. This year's Web 2.0 was filled with Web developer oriented services, but my favorite was MindTouch. Perhaps because their focus was that of directly engaging end users in the customization of business processes. Yes, the creation of data objects is clearly in the realm of trained developers. And for sure many tools were announced at Web 2.0 to further the much needed wiring of data objects. But once wired and available, services like MindTouch, I think, will become the way end users interact and create new business productivity methods. Great coverage.

    "...... For awareness and understanding of the fast-growing world of mashups are significant challenges as IT practitioners, business strategists, and software vendors attempt to grapple with what's facing up to be the biggest challenge of all: The habits and expectations of the larger part of a generation of workers who don't yet realize mashups are poised to change many things about the software landscape on the Web and in the workplace. Generational changes can be difficult for businesses to embrace successfully, and while evidence that mashups are remaking the business world are still very much emerging, they certainly hold the promise..."

    ".... while the life of the average Web developer has been greatly improved by the availability of a wide variety of useful open APIs, the average user of the Web hasn't been a direct beneficiary except through the increase in Web apps that are built on the mashup model. And that's because the tools that empower users to weave together existing Web parts and open APIs into the exact solutions they need are just now becoming easy enough and robust enough to readily enable these scenarios. And that doesn't include the variety of
Gonzalo San Gil, PhD.

German Regulator Rejects German Newspapers' Cynical Attempt To Demand Cash From Google ... - 0 views

  •  
    "from the nice-try-but-no dept Back in June we wrote about the ridiculous and cynical attempt by a number of big German newspaper publishers, in the form of the industry group VG Media, to demand 11% of Google's gross worldwide revenue on any search that results in Google showing a snippet of their content. We noted the hypocrisy of these publishers seeking to do this while"
Gonzalo San Gil, PhD.

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED [# ! Via... - 0 views

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography.
  • At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” uses Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic. [A quick back-of-the-envelope calculation after these notes makes the gap between the two figures concrete.]
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
  •  
    [# Via Paul Merrell's Diigo...] "That is a huge, and important, distinction. The vast majority of Tor's users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what's often referred to as the "dark web," use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software's creators at the non-profit Tor Project."
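As a sanity check on the correction quoted above, here is the back-of-the-envelope arithmetic using only the two figures cited in these notes: roughly 1.5 percent of all Tor traffic goes to hidden services (Tor Project figure), and the Portsmouth study tied roughly 80 percent of hidden-service visits to abuse sites.

```python
# Back-of-the-envelope check using the figures quoted above.
hidden_service_share = 0.015   # share of all Tor traffic visiting hidden services (Tor Project)
study_share_of_hidden = 0.80   # share of hidden-service visits flagged in the Portsmouth study

share_of_all_tor_traffic = hidden_service_share * study_share_of_hidden
print(f"{share_of_all_tor_traffic:.1%} of overall Tor traffic")
# -> "1.2% of overall Tor traffic", i.e. close to Dingledine's "single percent,"
#    not the 80 percent figure repeated by the DOJ.
```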
Gonzalo San Gil, PhD.

U.S. Net Neutrality Has a Massive Copyright Loophole | TorrentFreak - 0 views

  •  
    # ! [... Fingers crossed…. ] " Ernesto on March 15, 2015 C: 0 Opinion After years of debating, U.S. Internet subscribers now have Government regulated Net Neutrality. A huge step forward, according to some, but the full order released a few days ago reveals some worrying caveats. While the rules prevent paid prioritization, they do very little to prevent BitTorrent blocking, the very issue that got the net neutrality debate started."
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high. [A toy sketch of this escrow-database risk follows these notes.]
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous. There are also unaddressed concerns, namely preventing encrypted devices from entering the US, and the slippery slope. In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country: encrypted phones aren’t immediately obvious the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
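The escrow-database concern flagged a few notes above (a separate access key for every phone, all held by the vendor) is easy to make concrete. The sketch below is a toy illustration only: the XOR keystream is not real cryptography, and every name and value is invented. Its only point is that whoever obtains the escrow database plus any stolen device can decrypt that device, so the database becomes a single point of failure for every phone at once.

```python
# Toy illustration (NOT real cryptography) of a per-device key escrow scheme
# and why the escrow database is a single point of failure.
import hashlib
import os


def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' keyed by SHA-256 in counter mode; the same call encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))


device_images = {}   # encrypted storage, as it would sit on each phone
escrow_db = {}       # the mandated copy of every device key, held by the vendor

for serial in ("PHONE-001", "PHONE-002"):
    device_key = os.urandom(32)                    # unique key per phone
    plaintext = f"{serial}: contacts, photos, banking tokens".encode()
    device_images[serial] = xor_stream(device_key, plaintext)
    escrow_db[serial] = device_key                 # the backdoor requirement

# An attacker who steals one phone AND a dump of the escrow database
# recovers that phone's data -- and could do the same for any other phone.
stolen = "PHONE-002"
recovered = xor_stream(escrow_db[stolen], device_images[stolen])
print(recovered.decode())
```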
Gonzalo San Gil, PhD.

Readers respond: "What do you love about Linux?" | Opensource.com - 0 views

  •  
    "Today marks 25 years of Linux, the most successful software ever. At LinuxCon this week, Jim Zemlin of the Linux Foundation spoke words of admiration, praise, and excitement from the keynote stage, saying "Linux at 25 is a big thing" and "You can better yourself while bettering others at the same time." To celebrate, we asked our readers what they love about Linux and rounded up 25 of their responses. Dive into the Linux love!"
Paul Merrell

Wikileaks Releases "NightSkies 1.2": Proof CIA Bugs "Factory Fresh" iPhones | Zero Hedge - 0 views

  • The latest leaks from WikiLeaks' Vault 7 is titled “Dark Matter” and claims that the CIA has been bugging “factory fresh” iPhones since at least 2008 through suppliers.
  • And here is the full press release from WikiLeaks: Today, March 23rd 2017, WikiLeaks releases Vault 7 "Dark Matter", which contains documentation for several CIA projects that infect Apple Mac Computer firmware (meaning the infection persists even if the operating system is re-installed) developed by the CIA's Embedded Development Branch (EDB). These documents explain the techniques used by CIA to gain 'persistence' on Apple Mac devices, including Macs and iPhones, and demonstrate their use of EFI/UEFI and firmware malware. Among others, these documents reveal the "Sonic Screwdriver" project which, as explained by the CIA, is a "mechanism for executing code on peripheral devices while a Mac laptop or desktop is booting" allowing an attacker to boot its attack software for example from a USB stick "even when a firmware password is enabled". The CIA's "Sonic Screwdriver" infector is stored on the modified firmware of an Apple Thunderbolt-to-Ethernet adapter. "DarkSeaSkies" is "an implant that persists in the EFI firmware of an Apple MacBook Air computer" and consists of "DarkMatter", "SeaPea" and "NightSkies", respectively EFI, kernel-space and user-space implants. Documents on the "Triton" MacOSX malware, its infector "Dark Mallet" and its EFI-persistent version "DerStake" are also included in this release. While the DerStake1.4 manual released today dates to 2013, other Vault 7 documents show that as of 2016 the CIA continues to rely on and update these systems and is working on the production of DerStarke2.0. Also included in this release is the manual for the CIA's "NightSkies 1.2", a "beacon/loader/implant tool" for the Apple iPhone. Noteworthy is that NightSkies had reached 1.2 by 2008, and is expressly designed to be physically installed onto factory fresh iPhones, i.e. the CIA has been infecting the iPhone supply chain of its targets since at least 2008. While CIA assets are sometimes used to physically infect systems in the custody of a target, it is likely that many CIA physical access attacks have infected the targeted organization's supply chain including by interdicting mail orders and other shipments (opening, infecting, and resending) leaving the United States or otherwise.
Paul Merrell

EU Committee Votes to Make All Smartphone Vendors Utilize a Standard Charger - HotHardware - 0 views

  • The EU has been known to make a lot of odd decisions when it comes to tech, such as forcing Microsoft's hand at including a "browser wheel" with its Windows OS, but this latest decision is one I think most people will agree with. One thing that's frustrating about different smartphones is the occasional requirement to use a different charger. More frustrating is actually losing one of these chargers, and being unable to charge your phone even though you might have 8 of another charger readily available.
  • While this decision would cut down on this happening, the focus is to cut down on waste. On Thursday, the EU's internal market and consumer protection committee voted on forcing smartphone vendors to adopt a standard charger, which common sense would imply means micro USB, given it's already featured on the majority of smartphones out there. The major exception is Apple, which deploys a Lightning connector with its latest iPhones. Apple already offers Lightning to micro USB cables, but again, those are only useful if you happen to own one, making a sudden loss of a charger all-the-more frustrating. While Lightning might offer some slight benefits, Apple implementing a micro USB connector instead would make situations like those a lot easier to deal with (I am sure a lot of us have multiple micro USB cables lying around). Even though this law was a success in the initial voting, the government group must still bring the proposal to the Council which will then lead to another vote being made in the Parliament. If it does end up passing, I have a gut feeling that Apple will modify only its European models to adhere to the law, while its worldwide models will remain with the Lightning connector. Or, Apple might be able to circumvent the law if it offers to include the micro USB cable in the box, essentially shipping the phone with that connector.
Gonzalo San Gil, PhD.

The "Internet Governance" Farce and its "Multi-stakeholder" Illusion | La Quadrature du... - 0 views

  •  
    by Jérémie Zimmermann For almost 15 years, "Internet Governance" meetings1 have been drawing attention and driving our imaginaries towards believing that consensual rules for the Internet could emerge from global "multi-stakeholder" discussions. A few days ahead of the "NETmundial" Forum in Sao Paulo it has become obvious that "Internet Governance" is a farcical way of keeping us busy and hiding a sad reality: Nothing concrete in these 15 years, not a single action, ever emerged from "multi-stakeholder" meetings, while at the same time, technology as a whole has been turned against its users, as a tool for surveillance, control and oppression.
Paul Merrell

The FCC is about to kill the free Internet | PandoDaily - 0 views

  • The Federal Communications Commission is poised to ruin the free Internet on a technicality. The group is expected to introduce new net neutrality laws that would allow companies to pay for better access to consumers through deals similar to the one struck by Netflix and Comcast earlier this year. The argument is that those deals don’t technically fall under the net neutrality umbrella, so these new rules won’t apply to them even though they directly affect the Internet. At least the commission is being upfront about its disinterest in protecting the free Internet.
  • The Verge notes that the proposed rules will offer some protections to consumers: The Federal Communications Commission’s proposal for new net neutrality rules will allow internet service providers to charge companies for preferential treatment, effectively undermining the concept of net neutrality, according to The Wall Street Journal. The rules will reportedly allow providers to charge for preferential treatment so long as they offer that treatment to all interested parties on “commercially reasonable” terms, with the FCC deciding whether the terms are reasonable on a case-by-case basis. Providers will not be able to block individual websites, however. The goal of net neutrality rules is to prevent service providers from discriminating between different content, allowing all types of data and all companies’ data to be treated equally. While it appears that outright blocking of individual services won’t be allowed, the Journal reports that some forms of discrimination will be allowed, though that will apparently not include slowing down websites.
  • Re/code summarizes the discontent with these proposed rules: Consumer groups have complained about that plan because they’re worried that Wheeler’s rules may not hold up in court either. A federal appeals court rejected two previous versions of net neutrality rules after finding fault in the FCC’s legal reasoning. During the latest smackdown, however, the court suggested that the FCC had some authority to impose net neutrality rules under a section of the law that gives the agency the ability to regulate the deployment of broadband lines. Internet activists would prefer that the FCC just re-regulate Internet lines under old rules designed for telephone networks, which they say would give the agency clear authority to police Internet lines. Wheeler has rejected that approach for now. Phone and cable companies, including Comcast, AT&T and Verizon, have vociferously fought that idea over the past few years.
  • The Chicago Tribune reports on the process directing these rules: The five-member regulatory commission may vote as soon as May to formally propose the rules and collect public comment on them. Virtually all large Internet service providers, such as Verizon Communications Inc. and Time Warner Cable Inc., have pledged to abide by the principles of open Internet reinforced by these rules. But critics have raised concerns that, without a formal rule, the voluntary pledges could be pulled back over time and also leave the door open for deals that would give unequal treatment to websites or services.
  • I wrote about the European Union’s attempts to defend the free Internet: The legislation is meant to provide access to online services ‘without discrimination, restriction or interference, independent of the sender, receiver, type, content, device, service or application.’ For example, ISPs would be barred from slowing down or ‘throttling’ the speed at which one service’s videos are delivered while allowing other services to stream at normal rates. To bastardize Gertrude Stein: a byte is a byte is a byte. Such restrictions would prevent deals like the one Comcast recently made with Netflix, which will allow the service’s videos to reach consumers faster than before. Comcast is also said to be in talks with Apple for a deal that would allow videos from its new streaming video service to reach consumers faster than videos from competitors. The Federal Communications Commission’s net neutrality laws don’t apply to those deals, according to FCC Chairman Tom Wheeler, so they are allowed to continue despite the threat they pose to the free Internet.
  •  
    Cute. Deliberately not using the authority the court of appeals said it could use to impose net neutrality. So Europe can have net neutrality but not in the U.S.
Gonzalo San Gil, PhD.

Record Labels Sue 'New' Grooveshark, Seize Domains | TorrentFreak - 0 views

  •  
    " Ernesto on May 15, 2015 C: 0 Breaking A few days after music streaming service Grooveshark shut down and settled with the major record labels, the site was 'resurrected' by unknown people. While the reincarnation bears more resemblance to a traditional MP3 search engine than Grooveshark, the labels are determined to bring it down."