Future of the Web: Group items tagged "long"

Gonzalo San Gil, PhD.

Linux Kernel 4.1 Will Be the Next LTS Version - Softpedia - 0 views

  •  
    "Linux 4.1 is set to be a long-term support release According to a recent tweet from the Linux Foundation's LinuxLTSI account, it appears that the next LTS (Long Term Support) version of the Linux kernel will be 4.1, which is currently in development."
Gonzalo San Gil, PhD.

Anti-Piracy Law Boosted Music Sales, Plunged Internet Traffic | TorrentFreak - 0 views

  •  
    " y Ernesto on May 9, 2014 C: 54 News A new study on the effects of the IPRED anti-piracy law in Sweden shows that the legislation increased music sales by 36 percent. At the same time, Internet traffic in the country dropped significantly. The results suggest that the law initially had the desired effect, but the researchers also note this didn't last long." [... The question remains, however, whether bankrupting people or throwing them in jail is the ideal strategy in the long run… ||]
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
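    As a minimal illustration of this kind of class-based extension (the element contents and class names below are invented for illustration; they are not drawn from the article or from any microformat specification), a publisher could mark semantic roles directly on ordinary XHTML elements:

        <div class="chapter">
          <p class="epigraph">An opening epigraph, still just a paragraph to any browser.</p>
          <p>Ordinary body text needs no special treatment.</p>
          <p class="summary">A closing summary that stylesheets, scripts, or a later
            transformation can target via class="summary".</p>
        </div>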
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
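    A rough sketch of that anatomy (the layout shown is typical of EPUB packages generally, not taken from the article): unzipping an ePub reveals a small container file pointing at a package document, with the book's XHTML chapters sitting alongside it. The standard META-INF/container.xml looks roughly like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- META-INF/container.xml: tells the reading system where the package document lives -->
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                      media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>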
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
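    The pattern looked roughly like this (a hypothetical fragment; the actual class names emitted by the "Digital Editions" export are not reproduced here):

        <!-- Before: presentation-oriented export, list items tagged as styled paragraphs -->
        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

        <!-- After cleanup: proper XHTML list structure, independent of any particular tool -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>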
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a command-line XSLT processor and a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
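    A minimal sketch of what that setup looks like (the file names are hypothetical and the single template rule is a placeholder, not the authors' script): an XSLT stylesheet is itself an XML document, and xsltproc simply applies it to an input file.

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Hypothetical stylesheet; run with, for example:
             xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>
          <!-- Template rules map XHTML structures onto their InDesign counterparts -->
          <xsl:template match="/">
            <!-- placeholder: walk the XHTML input and emit ICML here -->
            <xsl:apply-templates/>
          </xsl:template>
        </xsl:stylesheet>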
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
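    A sketch of the kind of template rule such a script might contain (a hypothetical fragment that assumes the xhtml namespace binding from the skeleton above; the style names are placeholders that would have to match the InDesign template, and the real script referenced in the article is not reproduced):

        <!-- Map an XHTML paragraph onto an ICML paragraph range carrying a named
             paragraph style defined in the house InDesign template -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/[No character style]">
              <!-- inline markup is flattened to text here for brevity -->
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>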
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
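    For a sense of what the "wrapper" amounts to, here is a minimal sketch of an EPUB 2 package document of the kind a tool like eCub generates automatically (all titles, identifiers and file names are placeholders): it lists the XHTML chapters and the stylesheet in a manifest and puts the chapters in reading order in the spine.

        <?xml version="1.0" encoding="UTF-8"?>
        <package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Example Book</dc:title>
            <dc:language>en</dc:language>
            <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
          </metadata>
          <manifest>
            <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
            <item id="css"  href="book.css"        media-type="text/css"/>
            <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
            <item id="ch02" href="chapter02.xhtml" media-type="application/xhtml+xml"/>
          </manifest>
          <spine toc="ncx">
            <itemref idref="ch01"/>
            <itemref idref="ch02"/>
          </spine>
        </package>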
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection, more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    This was tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over the top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the Scientific American linked article, notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of information not in the Nature/Scientific American version of a sensationalist nature, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Paul Merrell

Proposed changes to US data collection fall short of NSA reformers' goals | US news | T... - 0 views

  • The US intelligence community has delivered a limited list of tweaks to how long it can hold information on ordinary citizens and hide secret trawls for data, responding to Barack Obama’s call for reform of its surveillance practices in the wake of revelations about NSA practices. Published by the office of the director of national intelligence, James Clapper, just six days before a recently announced visit to Washington by the German chancellor, Angela Merkel, the report is the culmination of a year-long effort to respond to revelations by whistleblower Edward Snowden.
  • But the report does not appear to address the role of telecommunications companies in collecting metadata and the use of encryption to prevent hacking, and privacy critics were quick to pounce on a year of promises with little reform to show. “It’s hard to see much ‘there’ there,” Senator Ron Wyden said in a statement. “When it comes to reforming intelligence programs and protecting Americans’ privacy, there is much, much more work to be done.” The outline from the intelligence community also appears to fall short of the legislative changes attempted by campaigners in Congress, focusing instead on measures to tighten internal guidelines and provide foreigners with some of the protections allowed for US citizens. These measures include:
  • Other measures outlined in the new report include steps to clarify the protection given to whistleblowers if they follow internal rules and a requirement that “any significant compliance incident involving personal information, regardless of the person’s nationality” be reported to Clapper.
  • Limiting how long personal data gathered from non-US citizens can be held to five years, so long as it is deemed not relevant to ongoing intelligence investigations.
  • Asking Congress to provide some foreign nationals access to legal redress if their private information has been wilfully disclosed by US intelligence agencies.
  • Limiting to three years how long the FBI can prevent disclosure of its surveillance activities using so-called national security letters, unless a special agent deems otherwise.
  • The official results of Obama’s call for surveillance reform also appear to have failed to address encryption. The FBI director, James Comey, and other officials have been highly critical of the use of encryption by tech companies such as Apple to protect their users’ information. Comey has argued that stronger encryption, baked in to some technology after the Snowden revelations, will aid criminals and terrorists and shut out law enforcement.
  • The intelligence report itself acknowledges that further reforms called for by the president, such as ending the collection of bulk data by the government, have not been implemented, possibly due to stalled legislative efforts in Congress.
Paul Merrell

U.S. military closer to making cyborgs a reality - CNNPolitics.com - 0 views

  • The U.S. military is spending millions on an advanced implant that would allow a human brain to communicate directly with computers. If it succeeds, cyborgs will be a reality. The Pentagon's research arm, the Defense Advanced Research Projects Agency (DARPA), hopes the implant will allow humans to directly interface with computers, which could benefit people with aural and visual disabilities, such as veterans injured in combat. The goal of the proposed implant is to "open the channel between the human brain and modern electronics" according to DARPA's program manager, Phillip Alvelda.
  • DARPA sees the implant as providing a foundation for new therapies that could help people with deficits in sight or hearing by "feeding digital auditory or visual information into the brain." A spokesman for DARPA told CNN that the program is not intended for military applications.
  • But some experts see such an implant as having the potential for numerous applications, including military ones, in the field of wearable robotics -- which aims to augment and restore human performance. Conor Walsh, a professor of mechanical and biomedical engineering at Harvard University, told CNN that the implant would "change the game," adding that "in the future, wearable robotic devices will be controlled by implants." Walsh sees the potential for wearable robotic devices or exoskeletons in everything from helping a medical patient recover from a stroke to enhancing soldiers' capabilities in combat. The U.S. military is currently developing a battery-powered exoskeleton, the Tactical Assault Light Operator Suit, to provide superior protection from enemy fire and in-helmet technologies that boost the user's communications ability and vision. The suits' development is being overseen by U.S. Special Operations Command. In theory, the proposed neural implant would allow the military member operating the suit to more effectively control the armored exoskeleton while deployed in combat.
  • In its announcement, DARPA acknowledged that an implant is still a long ways away, with breakthroughs in neuroscience, synthetic biology, low-power electronics, photonics and medical-device manufacturing needed before the device could be used. DARPA plans to recruit a diverse set of experts in an attempt to accelerate the project's development, according to its statement announcing the project.
  •  
    Let's assume for the moment that DARPA's goal is realizable and brain implants for communication with computers become common. How long will it take for FBI, NSA, et ilk to get legislation or a court order allowing them to conduct mass surveillance of people's brains? Not long, I suspect.
Gary Edwards

Microsoft Office whips Google Docs: It's finally game over | Computerworld Blogs - 0 views

  •  
    "If there was ever any doubt about whether Microsoft or Google would win the war of office suites, there should be no longer. Within the last several weeks, Microsoft has pulled so far ahead that it's game over. Here's why. When it comes to which suite is more fully featured, there's never been any real debate: Microsoft Office wins hands down. Whether you're creating entire presentations, creating complicated word-processing documents, or even doing something as simple as handling text attributes, Office is a far better tool. Until the last few weeks, Google Docs had one significant advantage over Microsoft Office: It's available for Android and the iPad as well as PCs because it's Web-based. The same wasn't the case for Office. So if you wanted to use an office suite on all your mobile devices, Google Docs was the way to go. Google Docs lost that advantage when Microsoft released Office for the iPad. There's not yet a native version for Android tablets, but Microsoft is working on that, telling GeekWire, "Let me tell you conclusively: Yes, we are also building Android native applications for tablets for Word, Excel and PowerPoint." Google Docs is still superior to Office's Web-based version, but that's far less important than it used to be. There's no need to go with a Web-based office suite if a superior suite is available as a native apps on all platforms, mobile or otherwise. And Office's collaboration capabilities are quite considerable now. Of course, there's always the question of price. Google Docs is free. Microsoft Office isn't. But at $100 a year for up to five devices, or $70 a year for two, no one will be going broke paying for Microsoft Office. It's worth paying that relatively small price for a much better office suite. Google Docs won't die. It'll be around as second fiddle for a long time. But that's what it will always remain: a second fiddle to the better Microsoft Office."
  •  
    Google acquired "Writely", a small company in Portola Valley that pioneered document editing in a browser. Writely was perhaps the first cloud computing editor to go beyond simple HTML; eventually crafting some really cool CSS-JavaScript-JSON document layout and editing methods. But it can't edit native MSOffice documents. It converts them. There are more than a few problems with the Google Docs approach to editing advanced "compound" documents, but two stick out and are certain to give pause to anyone making the great transition from local workgroup computing, to the highly mobile, always connected, cloud computing. The first problem certain to become a show stopper is that Google converts documents to their native on-line format for editing and collaboration. And then they convert back. To many this isn't a problem. But if the document is part of a workflow or business process, conversion is a killer. There is an old saw affectionately known as "Reuters Law", dating back to the ODF-OXML document wars, that emphatically states; "Conversion breaks documents." The breakage includes both the visual layout of the document, and, the "compound" aspects and data connections that are internal to the document. Think of this way. A business document that is part of a legacy Windows Workgroup workflow is opened up in gDocs. Google converts the document for editing purposes. The data and the workflow internals that bind the document to the local business system are broken on conversion. The look of the document is also visually shredded as the gDocs layout engine is applied. For all practical purposes, no matter what magic editing and collaboration value is added, a broken document means a broken business process. Let me say that again, with the emphasis of having witnessed this first hand during the year long ODF transition trials the Commonwealth of Massachusetts conducted in 2005 and 2006. The business process broke every time a conversion was conducted "on a busines
Gonzalo San Gil, PhD.

Mobile security: iOS 8 vs. Android 5 vs. BlackBerry vs. Windows Phone [# ! x BoS...]] - 0 views

  •  
    "Apple's iPhone and iPad long ago pushed out the BlackBerry as the corporate standard for mobile devices, in all but the highest-security environments. Google -- whose Android platform reigns outside the corporate world -- is now trying to push out Apple, with a new effort called Android for Work."
Paul Merrell

Victory for Users: Librarian of Congress Renews and Expands Protections for Fair Uses |... - 0 views

  • The new rules for exemptions to copyright's DRM-circumvention laws were issued today, and the Librarian of Congress has granted much of what EFF asked for over the course of months of extensive briefs and hearings. The exemptions we requested—ripping DVDs and Blurays for making fair use remixes and analysis; preserving video games and running multiplayer servers after publishers have abandoned them; jailbreaking cell phones, tablets, and other portable computing devices to run third party software; and security research and modification and repairs on cars—have each been accepted, subject to some important caveats.
  • The exemptions are needed thanks to a fundamentally flawed law that forbids users from breaking DRM, even if the purpose is a clearly lawful fair use. As software has become ubiquitous, so has DRM.  Users often have to circumvent that DRM to make full use of their devices, from DVDs to games to smartphones and cars. The law allows users to request exemptions for such lawful uses—but it doesn’t make it easy. Exemptions are granted through an elaborate rulemaking process that takes place every three years and places a heavy burden on EFF and the many other requesters who take part. Every exemption must be argued anew, even if it was previously granted, and even if there is no opposition. The exemptions that emerge are limited in scope. What is worse, they only apply to end users—the people who are actually doing the ripping, tinkering, jailbreaking, or research—and not to the people who make the tools that facilitate those lawful activities. The section of the law that creates these restrictions—the Digital Millennium Copyright Act's Section 1201—is fundamentally flawed, has resulted in myriad unintended consequences, and is long past due for reform or removal altogether from the statute books. Still, as long as its rulemaking process exists, we're pleased to have secured the following exemptions.
  • The new rules are long and complicated, and we'll be posting more details about each as we get a chance to analyze them. In the meantime, we hope each of these exemptions enables more exciting fair uses that educate, entertain, improve the underlying technology, and keep us safer. A better long-term solution, though, is to eliminate the need for this onerous rulemaking process. We encourage lawmakers to support efforts like the Unlocking Technology Act, which would limit the scope of Section 1201 to copyright infringements—not fair uses. And as the White House looks for the next Librarian of Congress, who is ultimately responsible for issuing the exemptions, we hope to get a candidate who acts—as a librarian should—in the interest of the public's access to information.
Gonzalo San Gil, PhD.

The Doomed Quest For The Golden Key | TechCrunch - 0 views

  •  
    " ... The genie of strong encryption is long, long out of the proverbial bottle. Earlier this week, Open WhisperSystems released Signal 2.0 for iOS, offering free, cross-platform, extremely secure end-to-end-encrypted voice calls and text messages to anyone with either an Android or an iPhone. What's more, all of their code is open-source; anyone can roll their own customized version. ..."
Paul Merrell

China's quantum satellite enables first totally secure long-range messages - 2 views

  • In the middle of the night, invisible to anyone but special telescopes in two Chinese observatories, satellite Micius sends particles of light to Earth to establish the world’s most secure communication link. Named after the ancient Chinese philosopher also known as Mozi, Micius is the world’s first quantum communications satellite and has, for several years, been at the forefront of quantum encryption. Scientists have now reported using this technology to reach a major milestone: long-range secure communication you could trust even without trusting the satellite it runs through. Launched in 2016, Micius has already produced a number of breakthroughs under its operating team led by Pan Jian-Wei, China’s “Father of Quantum”. The satellite serves as the source of pairs of entangled photons, twinned light particles whose properties remain intertwined no matter how far apart they are. If you manipulate one of the photons, the other will be similarly affected at the very same moment.
  • It is this property that lies at the heart of the most secure forms of quantum cryptography, the entanglement-based quantum key distribution. If you use one of the entangled particles to create a key for encoding messages, only the person with the other particle can decode them (see the toy sketch below).
  • Secure long-distance links such as this one will be the foundation of the quantum internet, the future global network with added security powered by laws of quantum mechanics, unmatched by classical cryptographic methods. The launch of Micius and the records set by the scientists and engineers building quantum communication systems with its help have been compared to the effect Sputnik had on the space race in the 20th century. In a similar way, the quantum race has political and military implications that are hard to ignore.
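    To make that key-distribution idea concrete, here is a minimal, purely classical Python simulation of the "sifting" step used in entanglement-based schemes such as BBM92/E91: each side measures its half of the pair in a randomly chosen basis, the bases (but not the results) are compared publicly, and only the rounds where the bases matched are kept as shared key bits. This is an illustrative sketch under simplified assumptions; it does not model the actual quantum physics, the Bell-test check, or eavesdropper detection.

```python
# Classical toy simulation of basis sifting in entanglement-based key
# distribution (BBM92/E91 flavour). Illustrative only: no real photons,
# no Bell-inequality check, no eavesdropper detection.
import secrets

def measure_pairs(n):
    """Simulate n entangled pairs: when both sides happen to pick the same
    measurement basis their outcomes agree; otherwise Bob's bit is noise."""
    alice, bob = [], []
    for _ in range(n):
        a_basis, b_basis = secrets.choice("+x"), secrets.choice("+x")
        a_bit = secrets.randbits(1)
        b_bit = a_bit if a_basis == b_basis else secrets.randbits(1)
        alice.append((a_basis, a_bit))
        bob.append((b_basis, b_bit))
    return alice, bob

def sift(alice, bob):
    """Compare bases publicly (never the bits) and keep only matching rounds."""
    key_a = [a_bit for (a_basis, a_bit), (b_basis, _) in zip(alice, bob) if a_basis == b_basis]
    key_b = [b_bit for (a_basis, _), (b_basis, b_bit) in zip(alice, bob) if a_basis == b_basis]
    return key_a, key_b

alice, bob = measure_pairs(256)
key_a, key_b = sift(alice, bob)
assert key_a == key_b                 # roughly half the rounds survive sifting
print(len(key_a), "shared key bits")
```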
Gonzalo San Gil, PhD.

Hollywood Withdraws Funding for UK Anti-Piracy Group FACT - TorrentFreak - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! Quit witch-hunt funding and invest in new Media policies...
  •  
    " Andy on May 24, 2016 C: 33 Breaking The UK's Federation Against Copyright Theft has received a major blow after the Motion Picture Association advised the anti-piracy group it will not renew its membership. The termination of the 30-year long relationship means that FACT will lose 50% of its budget and the backing of the six major Hollywood movie studios."
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele... - 0 views

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    "In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. "
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr (.) Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Gonzalo San Gil, PhD.

IDG Connect - Friday Rant: The Internet Is Broken - 0 views

  •  
    " Posted by Alex Cruickshank on May 30 2014 The internet has come a long way in 20 years. The infrastructure is older than that, of course, but it was 1994 when the amazing new World Wide Web started to make a serious impression on me and my colleagues and friends. Firing up Netscape Navigator 1.0 on Windows 3.11 with a wobbly TCP/IP stack and a 14.4kbps modem, typing in a cryptic URL and seeing information from the other side of the world, instantly! It was an incredible experience."
Gonzalo San Gil, PhD.

Apple will face $350M trial over iPod DRM | Ars Technica - 1 views

  •  
    "Apple's DRM schemes have been long disliked by activists. But are they illegal?"
Gonzalo San Gil, PhD.

What is good reference management software on Linux - Xmodulo - 1 views

  •  
    "Last updated on October 10, 2014 Authored by Adrien Brochard 3 Comments Have you ever written a paper so long that you thought you would never see the end of it? If so, you know that the worst part is not dedicating hours on it, but rather that once you are done, you still have to order and format your references into a structured convention-following bibliography."
Gonzalo San Gil, PhD.

Grooveshark is Dead :( 01-05-15 - 0 views

  •  
    Long Life to Grooveshark
Gonzalo San Gil, PhD.

EU digital single market: Death by compromise - POLITICO (*) - 0 views

  •  
    "A user's guide to the Commission's latest brainstorm." By Ryan Heath and Zoya Sheftalovich, 6/5/15, 5:30 AM CET, updated 6/5/15, 12:07 PM CET. The long-awaited, much-ballyhooed Digital Single Market strategy is set to be published at noon Wednesday by the European Commission. Reaction will be quick, loud and vociferous, but look for clues to the answer to one key question: Will this document really change anything? [* The structure of Media supply/demand keeps on being vertical: users will only access what Big Companies offer. There must be a way -via watermarking, perhaps- to allow people to consume whatever they want, and fairly monetize it later... If not, the contents will be kept restricted to the editors' will: that is censorship and restriction in the age of abundance and freedom]
Gonzalo San Gil, PhD.

Legal Scholars Warn Against 10 Year Prison for Online Pirates - TorrentFreak - 0 views

  •  
    " Ernesto on August 15, 2015 C: 70 News Legal experts and activists are protesting a UK Government proposal to increase the maximum jail term for online piracy from two to ten years. The proposed extension is disproportionate, ineffective and puts casual file-sharers at risk of long jail sentences, they argue."