Group items tagged: due-process

Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.
    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.
    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
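    A minimal sketch of how that recovery might work in practice, assuming publisher-chosen class names like epigraph and summary (they are illustrative, not taken from any published microformat):

```python
# A sketch of XHTML's class-based extensibility: ordinary <p> elements
# carry publisher-defined semantics in their class attributes, and any
# XML tool can recover those distinctions later.
from lxml import etree

xhtml = b"""<html xmlns="http://www.w3.org/1999/xhtml"><body>
  <p class="epigraph">An opening epigraph.</p>
  <p>Ordinary body text.</p>
  <p class="summary">A chapter summary, tagged for reuse.</p>
</body></html>"""

doc = etree.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Pull out just the semantically tagged chunks, ignoring plain paragraphs.
for p in doc.xpath("//x:p[@class]", namespaces=ns):
    print(p.get("class"), "->", p.text)
```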
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
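    Because an ePub is just a zip archive, its anatomy can be inspected with nothing more than Python's standard library. The file name and the member listing in the comments below describe a typical ePub 2 layout, not any particular book:

```python
# An .epub is a zip archive whose members are mostly XML and XHTML;
# the standard library is enough to look inside one.
import zipfile

with zipfile.ZipFile("book.epub") as epub:   # file name is hypothetical
    for name in epub.namelist():
        print(name)

# A typical ePub 2 listing:
#   mimetype                 -- the literal string "application/epub+zip"
#   META-INF/container.xml   -- points at the package file
#   OEBPS/content.opf        -- book metadata and manifest
#   OEBPS/toc.ncx            -- navigation map / table of contents
#   OEBPS/chapter01.xhtml    -- the content itself: a Web page in a wrapper
```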
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines:
    - prefer existing and/or ubiquitous tools over specialized ones wherever possible;
    - prefer free software over proprietary systems where possible;
    - prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems;
    - play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
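    The same zip-inspection trick applies to an IDML file. A quick sketch; the member names in the comments reflect IDML's documented layout, though exact contents vary by document:

```python
# The same inspection works on InDesign's IDML format.
import zipfile

with zipfile.ZipFile("layout.idml") as idml:   # file name is hypothetical
    for name in sorted(idml.namelist()):
        print(name)

# Expect roughly:
#   designmap.xml            -- the top-level map tying the parts together
#   MasterSpreads/...        -- master page definitions
#   Spreads/Spread_*.xml     -- layout spreads
#   Stories/Story_*.xml      -- the actual text content
#   Resources/Styles.xml     -- paragraph and character style definitions
```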
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
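    That cleanup step lends itself to mechanization. A sketch of the idea using lxml, assuming a hypothetical list-item class (InDesign's actual export classes vary by document):

```python
# Collapse runs of <p class="list-item"> into a real <ul> of <li> elements.
# The class name is hypothetical; inspect your own export to find the real one.
from lxml import etree

def listify(body):
    current_ul = None
    for p in list(body):                      # snapshot: we mutate body below
        if p.tag == "p" and p.get("class") == "list-item":
            if current_ul is None:
                current_ul = etree.Element("ul")
                p.addprevious(current_ul)     # the ul takes the run's position
            etree.SubElement(current_ul, "li").text = p.text
            body.remove(p)
        else:
            current_ul = None                 # a non-list paragraph ends the run
    return body

body = etree.fromstring(
    "<body><p class='list-item'>First</p>"
    "<p class='list-item'>Second</p><p>After the list.</p></body>")
print(etree.tostring(listify(body), pretty_print=True).decode())
```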
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (XSL Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also ship it as a standard tool); any XSLT processor would work.
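    For illustration, the same step can also be driven from Python: lxml wraps the same libxslt engine that xsltproc uses. A minimal sketch with hypothetical file names:

```python
# An xsltproc-equivalent in a few lines; lxml drives the same libxslt engine.
from lxml import etree

transform = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))   # hypothetical names
result = transform(etree.parse("chapter01.xhtml"))

with open("chapter01.icml", "wb") as out:
    out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))
```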
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
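    A heavily simplified sketch of the kind of mapping such a script performs: XHTML paragraphs become ICML ParagraphStyleRange/Content structures keyed to named InDesign styles. Real ICML needs considerably more wrapper structure (including an <?aid?> processing instruction), so treat this as an illustration of the mapping, not a placeable ICML file:

```python
# XHTML paragraphs mapped onto ICML-style ranges. The stylesheet is embedded
# as a string so the sketch is self-contained.
from lxml import etree

XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:template match="/">
    <Story>
      <xsl:apply-templates select="//x:p"/>
    </Story>
  </xsl:template>
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSL))
xhtml = etree.fromstring(
    b'<html xmlns="http://www.w3.org/1999/xhtml"><body>'
    b'<p>A paragraph bound for InDesign.</p></body></html>')
print(etree.tostring(transform(xhtml), pretty_print=True).decode())
```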
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
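    The "wrapping" step in miniature, using only Python's standard library rather than eCub. The one genuinely fussy rule is that the mimetype member must come first and be stored uncompressed; the .opf manifest and table of contents are elided here, and a real ePub needs both:

```python
# Build the ePub wrapper by hand: mimetype first and uncompressed,
# then the container pointer, then the XHTML content.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

with zipfile.ZipFile("book.epub", "w", zipfile.ZIP_DEFLATED) as epub:
    epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)
    epub.writestr("OEBPS/chapter01.xhtml",   # stand-in for real chapter files
                  "<html xmlns='http://www.w3.org/1999/xhtml'>"
                  "<body><p>Chapter one.</p></body></html>")
```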
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.
    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.
    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to…
Gonzalo San Gil, PhD.

U.S. Court Grants Order to Wipe Pirate Sites from the Internet | TorrentFreak

  •  
    "... The preliminary injunction is unique in its kind, both due to its broadness and the fact that it happened without due process. This has several experts worried, including EFF's Intellectual Property Director Corynne McSherry. "It's very worrisome that a court would issue a rapid and broad order affecting speech based on allegations, without careful consideration and an opportunity for the targets to defend themselves," McSherry tells TorrentFreak."
Paul Merrell

6 Anti-NSA Technological innovations that May Just Change the World | StormCloudsGathering

  • Rather than grovel and beg for the U.S. government to respect our privacy, these innovators have taken matters into their own hands, and their work may change the playing field completely.
  • People used to assume that the United States government was held in check by the constitution, which prohibits unreasonable searches and seizures and which demands due process in criminal investigations, but such illusions have evaporated in recent years. It turns out that the NSA considers itself above the law in every respect and feels entitled to spy on anyone anywhere in the world without warrants, and without any real oversight. Understandably these revelations shocked the average citizen who had been conditioned to take the government's word at face value, and the backlash has been considerable. The recent "Today We Fight Back" campaign to protest the NSA's surveillance practices shows that public sentiment is in the right place. Whether these kinds of petitions and protests will have any real impact on how the U.S. government operates is questionable (to say the least), however some very smart people have decided not to wait around and find out. Instead they're focusing on making the NSA's job impossible. In the process they may fundamentally alter the way the internet operates.
Paul Merrell

Civil Society Groups Ask Facebook To Provide Method To Appeal Censorship | PopularResis...

  • EFF, Human Rights Watch, and Over 70 Civil Society Groups Ask Mark Zuckerberg to Provide All Users with Mechanism to Appeal Content Censorship on Facebook
    World’s Freedom of Expression Is In Your Hands, Groups Tell CEO
    San Francisco—The Electronic Frontier Foundation (EFF) and more than 70 human and digital rights groups called on Mark Zuckerberg today to add real transparency and accountability to Facebook’s content removal process. Specifically, the groups demand that Facebook clearly explain how much content it removes, both rightly and wrongly, and provide all users with a fair and timely method to appeal removals and get their content back up. While Facebook is under enormous—and still mounting—pressure to remove material that is truly threatening, without transparency, fairness, and processes to identify and correct mistakes, Facebook’s content takedown policies too often backfire and silence the very people that should have their voices heard on the platform.
    Politicians, museums, celebrities, and other high profile groups and individuals whose improperly removed content can garner media attention seem to have little trouble reaching Facebook to have content restored—they sometimes even receive an apology. But the average user? Not so much. Facebook only allows people to appeal content decisions in a limited set of circumstances, and in many cases, users have absolutely no option to appeal. Onlinecensorship.org, an EFF project for users to report takedown notices, has collected reports of hundreds of unjustified takedown incidents where appeals were unavailable. For most users, content Facebook removes is rarely restored, and some are banned from the platform for no good reason.
    EFF, Article 19, the Center for Democracy and Technology, and Ranking Digital Rights wrote directly to Mark Zuckerberg today demanding that Facebook implement common sense standards so that average users can easily appeal content moderation decisions, receive prompt replies and timely review by a human or humans, and have the opportunity to present evidence during the review process. The letter was co-signed by more than 70 human rights, digital rights, and civil liberties organizations from South America, Europe, the Middle East, Asia, Africa, and the U.S.
Gonzalo San Gil, PhD.

Takedown, Staydown Would Be a Disaster, Internet Archive Warns - TorrentFreak

  •  
    " By Andy on June 7, 2016 C: 100 News The Internet Archive has issued its sternest warning yet over proposed changes to the DMCA. The Archive says that 'Notice and Staydown' would be an "absolute disaster" for the Internet that would trample due process, promote user monitoring, censorship, and have First Amendment implications."
Paul Merrell

Senate narrowly rejects new FBI surveillance | TheHill

  • The Senate narrowly rejected expanding the FBI's surveillance powers Wednesday in the wake of the worst mass shooting in U.S. history. Senators voted 58-38 on a procedural hurdle, with 60 votes needed to move forward. Majority Leader Mitch McConnell, who initially voted "yes," switched his vote, which allows him to potentially bring the measure back up.
  • The Senate GOP proposal—being offered as an amendment to the Commerce, Justice and Science appropriations bill—would allow the FBI to use "national security letters" to obtain people's internet browsing history and other information without a warrant during a terrorism or federal intelligence probe.  It would also permanently extend a Patriot Act provision — currently set to expire in 2019 — meant to monitor "lone wolf" extremists.  Senate Republicans said they would likely be able to get enough votes if McConnell schedules a redo.
  • Asked if he anticipates supporters will be able to get 60 votes, Sen. John Cornyn (R-Texas) separately told reporters "that's certainly my expectation." McConnell urged support for the proposal earlier Wednesday, saying it would let the FBI "connect the dots" in terrorist investigations. "We can focus on defeating [the Islamic State in Iraq and Syria] or we can focus on partisan politics. Some of our colleagues may think this is all some game," he said. "I believe this is a serious moment that calls for serious solutions." But Democrats—and some Republicans—raised concerns that the changes didn't go far enough to ensure Americans' privacy. Sen. Ron Wyden (D-Ore.) blasted his colleagues for "hypocrisy" after a gunman killed 49 people and injured dozens more during the mass shooting in Orlando, Fla. "Due process ought to apply as it relates to guns, but due process wouldn't apply as it relates to the internet activity of millions of Americans," he said ahead of Wednesday's vote. "Supporters of this amendment...have suggested that Americans need to choose between protecting our security and protecting our constitutional right to privacy."
  • The American Civil Liberties Union (ACLU) also came out in opposition to the Senate GOP proposal on Tuesday, warning it would urge lawmakers to vote against it.
  •  
    Too close for comfort and coming around the bend again.
Gary Edwards

Blog | Spritz

  • Therein lies one of the biggest problems with traditional RSVP. Each time you see text that is not centered properly on the ORP position, your eyes naturally will look for the ORP to process the word and understand its meaning. This requisite eye movement creates a “saccade”, a physical eye movement caused by your eyes taking a split second to find the proper ORP for a word. Every saccade has a penalty in both time and comprehension, especially when you start to speed up reading.
    Some saccades are considered by your brain to be “normal” during reading, such as when you move your eye from left to right to go from one ORP position to the next ORP position while reading a book. Other saccades are not normal to your brain during reading, such as when you move your eyes right to left to spot an ORP. This eye movement is akin to trying to read a line of text backwards. In normal reading, you won’t saccade right-to-left unless you encounter a word that your brain doesn’t already know and you go back for another look; those saccades will increase based on the difficulty of the text being read and the percentage of words within it that you already know.
    And the math doesn’t look good, either. If you determined the length of all the words in a given paragraph, you would see that, depending on the language you’re reading, there is a low (less than 15%) probability of two adjacent words being the same length and not requiring a saccade when they are shown to you one at a time. This means you move your eyes on a regular basis with traditional RSVP! In fact, you still move them with almost every word.
    In general, left-to-right saccades contribute to slower reading due to the increased travel time for the eyeballs, while right-to-left saccades are discombobulating for many people, especially at speed. It’s like reading a lot of text that contains words you don’t understand, only you DO understand the words! The experience is frustrating to say the least.
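    Spritz's actual ORP algorithm is not public, but a toy model consistent with the blog's qualitative claims (mid-word for 3-letter words, drifting left of center as words grow) looks something like this:

```python
# A toy Optimal Recognition Position: mid-word for very short words,
# drifting left of center as word length grows. Illustrative only.
def orp_index(word):
    n = len(word)
    if n <= 1:
        return 0
    if n <= 4:
        return 1                  # second letter for short words
    return max(1, n // 3)         # left of center for longer words

for w in ["to", "the", "word", "recognition", "discombobulating"]:
    i = orp_index(w)
    print(f"{w[:i]}[{w[i]}]{w[i + 1:]}")   # brackets mark the fixation letter
```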
  • In addition to saccading, another issue with RSVP is associated with “foveal vision,” the area in focus when you look at a sentence. This distance defines the number of letters on which your eyes can sharply focus as you read. Its companion is called “parafoveal vision” and refers to the area outside foveal vision that cannot be seen sharply.
  •  
    "To understand Spritz, you must understand Rapid Serial Visual Presentation (RSVP). RSVP is a common speed-reading technique used today. However, RSVP was originally developed for psychological experiments to measure human reactions to content being read. When RSVP was created, there wasn't much digital content and most people didn't have access to it anyway. The internet didn't even exist yet. With traditional RSVP, words are displayed either left-aligned or centered. Figure 1 shows an example of a center-aligned RSVP, with a dashed line on the center axis. When you read a word, your eyes naturally fixate at one point in that word, which visually triggers the brain to recognize the word and process its meaning. In Figure 1, the preferred fixation point (character) is indicated in red. In this figure, the Optimal Recognition Position (ORP) is different for each word. For example, the ORP is only in the middle of a 3-letter word. As the length of a word increases, the percentage that the ORP shifts to the left of center also increases. The longer the word, the farther to the left of center your eyes must move to locate the ORP."
Paul Merrell

Victory for Users: Librarian of Congress Renews and Expands Protections for Fair Uses |...

  • The new rules for exemptions to copyright's DRM-circumvention laws were issued today, and the Librarian of Congress has granted much of what EFF asked for over the course of months of extensive briefs and hearings. The exemptions we requested—ripping DVDs and Blurays for making fair use remixes and analysis; preserving video games and running multiplayer servers after publishers have abandoned them; jailbreaking cell phones, tablets, and other portable computing devices to run third party software; and security research and modification and repairs on cars—have each been accepted, subject to some important caveats.
  • The exemptions are needed thanks to a fundamentally flawed law that forbids users from breaking DRM, even if the purpose is a clearly lawful fair use. As software has become ubiquitous, so has DRM.  Users often have to circumvent that DRM to make full use of their devices, from DVDs to games to smartphones and cars. The law allows users to request exemptions for such lawful uses—but it doesn’t make it easy. Exemptions are granted through an elaborate rulemaking process that takes place every three years and places a heavy burden on EFF and the many other requesters who take part. Every exemption must be argued anew, even if it was previously granted, and even if there is no opposition. The exemptions that emerge are limited in scope. What is worse, they only apply to end users—the people who are actually doing the ripping, tinkering, jailbreaking, or research—and not to the people who make the tools that facilitate those lawful activities. The section of the law that creates these restrictions—the Digital Millennium Copyright Act's Section 1201—is fundamentally flawed, has resulted in myriad unintended consequences, and is long past due for reform or removal altogether from the statute books. Still, as long as its rulemaking process exists, we're pleased to have secured the following exemptions.
  • The new rules are long and complicated, and we'll be posting more details about each as we get a chance to analyze them. In the meantime, we hope each of these exemptions enables more exciting fair uses that educate, entertain, improve the underlying technology, and keep us safer. A better long-term solution, though, is to eliminate the need for this onerous rulemaking process. We encourage lawmakers to support efforts like the Unlocking Technology Act, which would limit the scope of Section 1201 to copyright infringements—not fair uses. And as the White House looks for the next Librarian of Congress, who is ultimately responsible for issuing the exemptions, we hope to get a candidate who acts—as a librarian should—in the interest of the public's access to information.
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
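    The mechanism is easy to see in miniature. The User-Agent format below is entirely hypothetical, not Kaspersky's actual string; it only illustrates how a stable token sent with every HTTP request becomes a passive machine identifier:

```python
# A stable token inside a User-Agent header acts as a passive machine
# identifier. "ExampleAV" and its id= field are invented for illustration.
import re

def extract_machine_id(user_agent):
    # Hypothetical pattern: product/version plus an encoded serial number.
    m = re.search(r"ExampleAV/[\d.]+ \(id=([A-F0-9]{12})\)", user_agent)
    return m.group(1) if m else None

ua = "ExampleAV/9.0.1 (id=3FA94C01BD22)"
print(extract_machine_id(ua))   # the same token rides along on every request,
                                # so a passive observer can track the machine
```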
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point Software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article:
    - Kaspersky User-Agent Strings — NSA
    - Project CAMBERDADA — NSA
    - NDIST — GCHQ’s Developing Cyber Defence Mission
    - GCHQ Application for Renewal of Warrant GPW/1160
    - Software Reverse Engineering — GCHQ
    - Reverse Engineering — GCHQ Wiki
    - Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Paul Merrell

US websites should inform EU citizens about NSA surveillance, says report

  • All existing data sharing agreements between Europe and the US should be revoked, and US web site providers should prominently inform European citizens that their data may be subject to government surveillance, according to the recommendations of a briefing report for the European Parliament. The report was produced in response to revelations about the US National Security Agency (NSA) snooping on internet traffic, and aims to highlight the subsequent effect on European Union (EU) citizens' rights.
  • The report warns that EU data protection authorities have failed to understand the “structural shift of data sovereignty implied by cloud computing”, and the associated risks to the rights of EU citizens. It suggests “a full industrial policy for development of an autonomous European cloud computing capacity” should be set up to reduce exposure of EU data to NSA surveillance that is undertaken by the use of US legislation that forces US-based cloud providers to provide access to data they hold.
  • To put pressure on the US government, the report recommends that US websites should ask EU citizens for their consent before gathering data that could be used by the NSA. “Prominent notices should be displayed by every US web site offering services in the EU to inform consent to collect data from EU citizens. The users should be made aware that the data may be subject to surveillance by the US government for any purpose which furthers US foreign policy,” it said. “A consent requirement will raise EU citizen awareness and favour growth of services solely within EU jurisdiction. This will thus have economic impact on US business and increase pressure on the US government to reach a settlement.”
  • Other recommendations include the EU offering protection and rewards for whistleblowers, including “strong guarantees of immunity and asylum”. Such a move would be seen as a direct response to the plight of Edward Snowden, the former NSA analyst who leaked documents that revealed the extent of the NSA’s global internet surveillance programmes. The report also says that, “Encryption is futile to defend against NSA accessing data processed by US clouds,” and that there is “no technical solution to the problem”. It calls for the EU to press for changes to US law.
  • “It seems that the only solution which can be trusted to resolve the Prism affair must involve changes to the law of the US, and this should be the strategic objective of the EU,” it said. The report was produced for the European Parliament committee on civil liberties, justice and home affairs, and comes before the latest hearing of an inquiry into electronic mass surveillance of EU citizens, due to take place in Brussels on 24 September.
  •  
    Yee-haw! E.U. sanctuary and rewards for NSA whistle-blowers. Mandatory warnings for customers of U.S. cloud services that their data may be turned over to the NSA. Pouring more gasoline on the NSA diplomatic fire. 
Paul Merrell

Yahoo breaks every mailing list in the world including the IETF's

  • DMARC is what one might call an emerging e-mail security scheme. There's a draft on it at draft-kucherawy-dmarc-base-04, intended for the independent stream. It's emerging pretty fast, since many of the largest mail systems in the world have already implemented it, including Gmail, Hotmail/MSN/Outlook, Comcast, and Yahoo.
  • The reason this matters is that over the weekend Yahoo published a DMARC record with a policy saying to reject all yahoo.com mail that fails DMARC. I noticed this because I got a blizzard of bounces from my church mailing list, when a subscriber sent a message from her yahoo.com account, and the list got a whole bunch of rejections from gmail, Yahoo, Hotmail, Comcast, and Yahoo itself. This is definitely a DMARC problem, the bounces say so.
    The problem for mailing lists isn't limited to the Yahoo subscribers. Since Yahoo mail provokes bounces from lots of other mail systems, innocent subscribers at Gmail, Hotmail, etc. not only won't get Yahoo subscribers' messages, but all those bounces are likely to bounce them off the lists. A few years back we had a similar problem due to an overstrict implementation of DKIM ADSP, but in this case, DMARC is doing what Yahoo is telling it to do.
    Suggestions:
    - Suspend posting permission of all yahoo.com addresses, to limit damage
    - Tell Yahoo users to get a new mail account somewhere else, pronto, if they want to continue using mailing lists
    - If you know people at Yahoo, ask if perhaps this wasn't such a good idea
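    The record at issue is an ordinary DNS TXT record at _dmarc.<domain>, so anyone can check a domain's published policy. A small sketch using the third-party dnspython package (the sample output in the comment is illustrative):

```python
# Look up a domain's published DMARC policy (requires the dnspython package).
import dns.resolver

for rdata in dns.resolver.resolve("_dmarc.yahoo.com", "TXT"):
    print(b"".join(rdata.strings).decode())

# A reject-everything policy of the kind described above looks roughly like:
#   v=DMARC1; p=reject; pct=100; rua=mailto:dmarc-reports@example.com
```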
  •  
    Short story: Check your SPAM folder for email from folks who email you from Yahoo accounts. That's where it's currently going. (They got rid of the first bug but created a new one in the process. Your Spam folder is where they're currently being routed.)
Paul Merrell

CISPA is back!

  • OPERATION: Fax Big Brother
    Congress is rushing toward a vote on CISA, the worst spying bill yet. CISA would grant sweeping legal immunity to giant companies like Facebook and Google, allowing them to do almost anything they want with your data. In exchange, they'll share even more of your personal information with the government, all in the name of "cybersecurity." CISA won't stop hackers — Congress is stuck in 1984 and doesn't understand modern technology. So this week we're sending them thousands of faxes — technology that is hopefully old enough for them to understand. Stop CISA. Send a fax now!
  • (Any tweet w/ #faxbigbrother will get faxed too!) Your email is only shown in your fax to Congress. We won't add you to any mailing lists.
  • CISA: the dirty deal between government and corporate giants. It's the dirty deal that lets much of the government, from the NSA to local police, get your private data from your favorite websites and lets them use it without due process. The government is proposing a massive bribe—they will give corporations immunity for breaking virtually any law if they do so while providing the NSA, DHS, DEA, and local police surveillance access to everyone's data in exchange for getting away with crimes, like fraud, money laundering, or illegal wiretapping. Specifically it incentivizes companies to automatically and simultaneously transfer your data to the DHS, NSA, FBI, and local police with all of your personally-identifying information by giving companies legal immunity (notwithstanding any law), and on top of that, you can't use the Freedom of Information Act to find out what has been shared.
  • The NSA and members of Congress want to pass a "cybersecurity" bill so badly, they’re using the recent hack of the Office of Personnel Management as justification for bringing CISA back up and rushing it through. In reality, the OPM hack just shows that the government has not been a good steward of sensitive data and they need to institute real security measures to fix their problems. The truth is that CISA could not have prevented the OPM hack, and no Senator could explain how it could have. Congress and the NSA are using irrational hysteria to turn the Internet into a place where the government has overly broad, unchecked powers. Why Faxes? Since 2012, online and civil liberties groups and 30,000+ sites have driven more than 2.6 million emails and hundreds of thousands of calls, tweets and more to Congress opposing overly broad cybersecurity legislation. Congress has tried to pass CISA in one form or another four times, and they were beaten back every time by people like you. It's clear Congress is completely out of touch with modern technology, so this week, as Congress rushes toward a vote on CISA, we are going to send them thousands of faxes, a technology from the 1980s that is hopefully antiquated enough for them to understand. Sending a fax is super easy — you can use this page to send a fax. Any tweet with the hashtag #faxbigbrother will get turned into a fax to Congress too, so what are you waiting for? Click here to send a fax now!
Paul Merrell

Rapid - Press Releases - EUROPA - 0 views

  • Did the Commission co-operate with the United States on this case?
    The Commission and the United States Federal Trade Commission have kept each other regularly and closely informed on the state of play of their respective Intel investigations. These discussions have been held in a co-operative and friendly atmosphere, and have been substantively fruitful in terms of sharing experiences on issues of common interest.
  • Where does the money go?
    Once final judgment has been delivered in any appeals before the Court of First Instance (CFI) and the Court of Justice, the money goes into the EU’s central budget, thus reducing the contributions that Member States pay to the EU.
  • Does Intel have to pay the fine if it appeals to the European Court of First Instance (CFI)?
    Yes. In case of appeals to the CFI, it is normal practice that the fine is paid into a blocked bank account pending the final outcome of the appeals process. Any fine that is provisionally paid will produce interest based on the interest rate applied by the European Central Bank to its main refinancing operations. In exceptional circumstances, companies may be allowed to cover the amount of the fine by a bank guarantee at a higher interest rate.
  • What percentage of Intel's turnover does the fine represent?
    The fine represents 4.15% of Intel's turnover in 2008. This is less than half the allowable maximum, which is 10% of a company's annual turnover. (A quick back-of-envelope check follows these notes.)
  • How long is the Decision?
    The Decision is 542 pages long.
  • When is the Decision going to be published?
    The Decision in English (the official language version of the Decision) will be made available as soon as possible on DG Competition’s website (once relevant business secrets have been taken out). French and German translations will also be made available on DG Competition’s website in due course. A summary of the Decision will be published in the EU's Official Journal L series in all languages (once the translations are available).
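    A quick back-of-envelope check of the percentages above. The fine amount itself (EUR 1.06 billion) is not stated in this excerpt; it is an assumption taken from contemporaneous reporting of the Commission's decision:

        # Only the 4.15% and 10% figures come from the excerpt above;
        # the EUR 1.06 bn fine is assumed from contemporaneous reporting.
        fine = 1.06e9          # EUR
        ratio = 0.0415         # fine = 4.15% of Intel's 2008 turnover
        turnover = fine / ratio
        cap = 0.10 * turnover  # the 10% statutory maximum
        print(f"Implied 2008 turnover: EUR {turnover / 1e9:.1f} bn")  # ~25.5
        print(f"10% cap on the fine:   EUR {cap / 1e9:.1f} bn")       # ~2.6
        print(f"Fine as share of cap:  {fine / cap:.1%}")             # ~41.5%, under half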
Paul Merrell

ExposeFacts - For Whistleblowers, Journalism and Democracy - 0 views

  • Launched by the Institute for Public Accuracy in June 2014, ExposeFacts.org represents a new approach for encouraging whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy. From the outset, our message is clear: “Whistleblowers Welcome at ExposeFacts.org.” ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war. At a time when key provisions of the First, Fourth and Fifth Amendments are under assault, we are standing up for a free press, privacy, transparency and due process as we seek to reveal official information—whether governmental or corporate—that the public has a right to know. While no software can provide an ironclad guarantee of confidentiality, ExposeFacts—assisted by the Freedom of the Press Foundation and its “SecureDrop” whistleblower submission system—is utilizing the latest technology on behalf of anonymity for anyone submitting materials via the ExposeFacts.org website. As journalists we are committed to the goal of protecting the identity of every source who wishes to remain anonymous.
  • The seasoned editorial board of ExposeFacts will be assessing all the submitted material and, when deemed appropriate, will arrange for journalistic release of information. In exercising its judgment, the editorial board is able to call on the expertise of the ExposeFacts advisory board, which includes more than 40 journalists, whistleblowers, former U.S. government officials and others with wide-ranging expertise. We are proud that Pentagon Papers whistleblower Daniel Ellsberg was the first person to become a member of the ExposeFacts advisory board. The icon below links to a SecureDrop implementation for ExposeFacts overseen by the Freedom of the Press Foundation and is only accessible using the Tor browser. As the Freedom of the Press Foundation notes, no one can guarantee 100 percent security, but this provides a “significantly more secure environment for sources to get information than exists through normal digital channels, but there are always risks.” ExposeFacts follows all guidelines as recommended by Freedom of the Press Foundation, and whistleblowers should too: the SecureDrop onion URL should only be accessed with the Tor browser — and, for added security, from a machine running the Tails operating system. Whistleblowers should not log in to SecureDrop from a home or office Internet connection, but rather from public wifi, preferably a network they do not frequent, and should keep interaction with whistleblowing-related websites to a minimum unless they are using such secure software.
    A new resource site for whistle-blowers, somewhat in the tradition of Wikileaks but designed for encrypted communications between whistleblowers and journalists. This one has an impressive board of advisors that includes several names I know and tend to trust, among them former whistle-blowers Daniel Ellsberg, Ray McGovern, Thomas Drake, William Binney, and Ann Wright. Leaked records can only be dropped from a web browser running the Tor anonymizer software, using the SecureDrop system originally developed by Aaron Swartz. They strongly recommend using the Tails secure operating system, which can be installed to a thumb drive and leaves no tracks on the host machine. https://tails.boum.org/index.en.html
    Curious, I downloaded Tails and installed it to a virtual machine. It's a heavily customized version of Debian. It has a very nice Gnome desktop and blocks any attempt to connect to an external network by means other than installed software that demands encrypted communications. For example, web sites can only be viewed via the Tor anonymizing proxy network (a minimal sketch of routing a request through Tor's local proxy follows this note). It does take longer for web pages to load because they are moving over a chain of proxies, but even so it's faster than pages loaded in the dial-up modem days, even for web pages that are loaded with graphics, javascript, and other cruft. E.g., about 2 seconds for New York Times pages. All cookies are treated by default as session cookies, so they disappear when you close the page or the browser.
    I love my Linux Mint desktop, but I am thinking hard about switching that box to Tails. I've been looking for methods to send a lot more encrypted stuff down the pipe for NSA to store. Tails looks to make that not only easy, but unavoidable. From what I've gathered so far, if you want to install more software on Tails, it takes about an hour to create a customized version and then update your Tails installation from a new ISO file. Tails has a wonderful odor of having been designed for secure computing.
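    Here is a minimal Python sketch of that "everything through Tor" routing: it sends one HTTP request through a local Tor client's SOCKS proxy and asks the Tor Project's check service whether the exit is a Tor node. It assumes a tor daemon listening on the stock port 127.0.0.1:9050 (the Tor Browser bundle uses 9150) and the requests library with SOCKS support; those details vary by setup, so treat this as a sketch, not a recipe.

        import requests  # pip install requests[socks]

        # socks5h (not socks5) makes DNS resolution happen inside Tor too,
        # so hostname lookups don't leak to the local resolver.
        TOR_PROXY = {
            "http": "socks5h://127.0.0.1:9050",
            "https": "socks5h://127.0.0.1:9050",
        }

        resp = requests.get("https://check.torproject.org/api/ip",
                            proxies=TOR_PROXY, timeout=60)
        print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit node address>"}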
Paul Merrell

Beware the Dangers of Congress' Latest Cybersecurity Bill | American Civil Liberties Union - 0 views

  • A new cybersecurity bill poses serious threats to our privacy, gives the government extraordinary powers to silence potential whistleblowers, and exempts these dangerous new powers from transparency laws. The Cybersecurity Information Sharing Act of 2014 ("CISA") was scheduled to be marked up by the Senate Intelligence Committee yesterday but has been delayed until after next week's congressional recess. The response to the proposed legislation from the privacy, civil liberties, tech, and open government communities was quick and unequivocal – this bill must not go through. The bill would create a massive loophole in our existing privacy laws by allowing the government to ask companies for "voluntary" cooperation in sharing information, including the content of our communications, for cybersecurity purposes. But the definition they are using for the so-called "cybersecurity information" is so broad it could sweep up huge amounts of innocent Americans' personal data. The Fourth Amendment protects Americans' personal data and communications from undue government access and monitoring without suspicion of criminal activity. The point of a warrant is to guard that protection. CISA would circumvent the warrant requirement by allowing the government to approach companies directly to collect personal information, including telephonic or internet communications, based on the new broadly drawn definition of "cybersecurity information."
  • While we hope many companies would jealously guard their customers' information, there is a provision in the bill that would excuse sharers from any liability if they act in "good faith" that the sharing was lawful. Collected information could then be used in criminal proceedings, creating a dangerous end-run around laws like the Electronic Communications Privacy Act, which contain warrant requirements. In addition to the threats to every American's privacy, the bill clearly targets potential government whistleblowers. Instead of limiting the use of data collection to protect against actual cybersecurity threats, the bill allows the government to use the data in the investigation and prosecution of people for economic espionage and trade secret violations, and under various provisions of the Espionage Act. It's clear that the bill is an attempt to give the government more power to crack down on whistleblowers, or "insider threats," in popular bureaucratic parlance. The Obama Administration has brought more "leaks" prosecutions against government whistleblowers and members of the press than all previous administrations combined. If misused by this or future administrations, CISA could eliminate due process protections for such investigations, which already favor the prosecution.
  • While actively stripping Americans' privacy protections, the bill also cloaks "cybersecurity"-sharing in secrecy by exempting it from critical government transparency protections. It unnecessarily and dangerously provides exemptions from state and local sunshine laws as well as the federal Freedom of Information Act. These are both powerful tools that allow citizens to check government activities and guard against abuse. Edward Snowden's revelations from the past year, of invasive spying programs like PRISM and Stellar Wind, have left Americans shocked and demanding more transparency by government agencies. CISA, however, flies in the face of what the public clearly wants. (Two coalition letters, here and here, sent to key members of the Senate yesterday detail the concerns of a broad coalition of organizations, including the ACLU.)
    The text of the bill is on Sen. Dianne Feinstein's site: http://goo.gl/2cdsSA
    It is truly a bummer.
Paul Merrell

The Newest Reforms on SIGINT Collection Still Leave Loopholes | Just Security - 0 views

  • Director of National Intelligence James Clapper this morning released a report detailing new rules aimed at reforming the way signals intelligence is collected and stored by certain members of the United States Intelligence Community (IC). The long-awaited changes follow up on an order announced by President Obama one year ago that laid out the White House’s principles governing the collection of signals intelligence. That order, commonly known as PPD-28, purports to place limits on the use of data collected in bulk and to increase privacy protections related to the data collected, regardless of nationality. Accordingly, most of the changes presented as “new” by Clapper’s office (ODNI) stem directly from the guidance provided in PPD-28, and so aren’t truly new. And of the biggest changes outlined in the report, there are still large exceptions that appear to allow the government to escape the restrictions with relative ease. Here’s a quick rundown.
  • National security letters (NSLs). The report also states that the FBI’s gag orders related to NSLs expire three years after the opening of a full-blown investigation or three years after an investigation’s close, whichever is earlier. However, these expiration dates can be easily overridden by an FBI Special Agent in Charge or a Deputy Assistant FBI Director who finds that the statutory standards for secrecy about the NSL continue to be satisfied (which at least one court has said isn’t a very high bar). This exception also doesn’t address concerns that NSL gag orders lack adequate due process protections, lack basic judicial oversight, and may violate the First Amendment.
  • Retention policy for non-U.S. persons. The new rules say that the IC must now delete information about “non-U.S. persons” that’s been gathered via signals intelligence after five years. However, there is a loophole that will let spies hold onto that information indefinitely whenever the Director of National Intelligence determines (after considering the views of the ODNI’s Civil Liberties Protection Officer) that retaining the information is in the interest of national security. The new rules don’t say whether the exceptions will be directed at entire groups of people or individual surveillance targets.
  • Section 215 metadata. Updates to the rules concerning data collected under Section 215 of the Patriot Act include the requirement that the Foreign Intelligence Surveillance Court (rather than authorized NSA officials) must determine that spies have “reasonable, articulable suspicion” before querying Section 215 data, outside of emergency circumstances. What qualifies as an emergency for these purposes? We don’t know. Additionally, the IC is now limited to two “hops” in querying the database. This means that spies can only play two degrees of Kevin Bacon, instead of the previously allowed three, with the contacts of anyone targeted under Section 215. The report doesn’t explain what would prevent the NSA (or another agency using the 215 databases) from getting around this limit by redesignating a phone number found in the first or second hop as a new “target,” thereby allowing the agency to continue the contact chain. (A toy sketch of hop-limited chaining follows these notes.)
  • The report also details the ODNI’s and IC’s plans for the future, including: (1) Working with Congress to reauthorize bulk collection under Section 215. (2) Updating agency guidelines under Executive Order 12333 “to protect the privacy and civil liberties of U.S. persons.” (3) Producing another annual report in January 2016 on the IC’s progress in implementing signals intelligence reforms. These plans raise more questions than they answer. Given the considerable doubts about Section 215’s effectiveness, why is the ODNI pushing for its reauthorization? And what will the ODNI consider appropriate privacy protections under Executive Order 12333?
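    To make the "hops" limit concrete, here is a toy Python sketch of hop-limited contact chaining over a call-records graph, along with the redesignation loophole flagged above. The dictionary graph and function are illustrative assumptions only; nothing here is drawn from the report itself.

        from collections import deque

        def contact_chain(graph, seed, max_hops=2):
            # Breadth-first walk: collect every number reachable from
            # `seed` within `max_hops` hops of a call-records graph
            # (dict: number -> set of numbers it was in contact with).
            seen, frontier = {seed}, deque([(seed, 0)])
            while frontier:
                number, hops = frontier.popleft()
                if hops == max_hops:
                    continue
                for contact in graph.get(number, ()):
                    if contact not in seen:
                        seen.add(contact)
                        frontier.append((contact, hops + 1))
            return seen

        # Toy records: A called B, B called C, C called D.
        calls = {"A": {"B"}, "B": {"C"}, "C": {"D"}}
        print(contact_chain(calls, "A"))  # {'A', 'B', 'C'} -- D is out of reach
        # The loophole: redesignate a hop-two number as a new "target"
        # and the chain keeps growing.
        print(contact_chain(calls, "C"))  # {'C', 'D'}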
Paul Merrell

European Parliament Urges Protection for Edward Snowden - The New York Times - 0 views

  • The European Parliament narrowly adopted a nonbinding but nonetheless forceful resolution on Thursday urging the 28 nations of the European Union to recognize Edward J. Snowden as a “whistle-blower and international human rights defender” and shield him from prosecution. On Twitter, Mr. Snowden, the former National Security Agency contractor who leaked millions of documents about electronic surveillance by the United States government, called the vote a “game-changer.” But the resolution has no legal force and limited practical effect for Mr. Snowden, who is living in Russia on a three-year residency permit. Whether to grant Mr. Snowden asylum remains a decision for the individual European governments, and none have done so thus far. Still, the resolution was the strongest statement of support seen for Mr. Snowden from the European Parliament. At the same time, the close vote — 285 to 281 — suggested the extent to which some European lawmakers are wary of alienating the United States.
  • The resolution calls on European Union members to “drop any criminal charges against Edward Snowden, grant him protection and consequently prevent extradition or rendition by third parties.” In June 2013, shortly after Mr. Snowden’s leaks became public, the United States charged him with theft of government property and violations of the Espionage Act of 1917. By then, he had flown to Moscow, where he spent weeks in legal limbo before he was granted temporary asylum and, later, a residency permit. Four Latin American nations have offered him permanent asylum, but he does not believe he could travel from Russia to those countries without running the risk of arrest and extradition to the United States along the way.
  • The White House, which has used diplomatic efforts to discourage even symbolic resolutions of support for Mr. Snowden, immediately criticized the resolution. “Our position has not changed,” said Ned Price, a spokesman for the National Security Council in Washington. “Mr. Snowden is accused of leaking classified information and faces felony charges here in the United States. As such, he should be returned to the U.S. as soon as possible, where he will be accorded full due process.” Jan Philipp Albrecht, one of the lawmakers who sponsored the resolution in Europe, said it should increase pressure on national governments.
  • “It’s the first time a Parliament votes to ask for this to be done — and it’s the European Parliament,” Mr. Albrecht, a German lawmaker with the Greens political bloc, said in a phone interview shortly after the vote, which was held in Strasbourg, France. “So this has an impact surely on the debate in the member states.” The resolution “is asking or demanding the member states’ governments to end all the charges and to prevent any extradition to a third party,” Mr. Albrecht said. “That’s a very clear call, and that can’t be just ignored by the governments,” he said.
Paul Merrell

The All Writs Act, Software Licenses, and Why Judges Should Ask More Questions | Just S... - 0 views

  • Pending before federal magistrate judge James Orenstein is the government’s request for an order obligating Apple, Inc. to unlock an iPhone and thereby assist prosecutors in decrypting data the government has seized and is authorized to search pursuant to a warrant. In an order questioning the government’s purported legal basis for this request, the All Writs Act of 1789 (AWA), Judge Orenstein asked Apple for a brief informing the court whether the request would be technically feasible and/or burdensome. After Apple filed, the court asked it to file a brief discussing whether the government had legal grounds under the AWA to compel Apple’s assistance. Apple filed that brief and the government filed a reply brief last week in the lead-up to a hearing this morning.
  • We’ve long been concerned about whether end users own software under the law. Software owners have rights of adaptation and first sale enshrined in copyright law. But software publishers have claimed that end users are merely licensees, and our rights under copyright law can be waived by mass-market end user license agreements, or EULAs. Over the years, Jennifer Granick has argued that users should retain their rights even if mass-market licenses purport to take them away. The government’s brief takes advantage of Apple’s EULA for iOS to argue that Apple, the software publisher, is responsible for iPhones around the world. Apple’s EULA states that when you buy an iPhone, you’re not buying the iOS software it runs, you’re just licensing it from Apple. The government argues that, having designed a passcode feature into a copy of software which it owns and licenses rather than sells, Apple can be compelled under the All Writs Act to bypass the passcode on a defendant’s iPhone pursuant to a search warrant and thereby access the software owned by Apple. Apple’s supplemental brief argues that in defining its users’ contractual rights vis-à-vis Apple with regard to Apple’s intellectual property, Apple in no way waived its own due process rights vis-à-vis the government with regard to users’ devices. Apple’s brief compares this argument to forcing a car manufacturer to “provide law enforcement with access to the vehicle or to alter its functionality at the government’s request” merely because the car contains licensed software.
  • This is an interesting twist on the decades-long EULA versus users’ rights fight. As far as we know, this is the first time that the government has piggybacked on EULAs to try to compel software companies to provide assistance to law enforcement. Under the government’s interpretation of the All Writs Act, anyone who makes software could be dragooned into assisting the government in investigating users of the software. If the court adopts this view, it would give investigators immense power. The quotidian aspects of our lives increasingly involve software (from our cars to our TVs to our health to our home appliances), and most of that software is arguably licensed, not bought. Conscripting software makers to collect information on us would afford the government access to the most intimate information about us, on the strength of some words in some license agreements that people never read. (And no wonder: The iPhone’s EULA came to over 300 pages when the government filed it as an exhibit to its brief.)
  • The government’s brief does not acknowledge the sweeping implications of its arguments. It tries to portray its requested unlocking order as narrow and modest, because it “would not require Apple to make any changes to its software or hardware, … [or] to introduce any new ability to access data on its phones. It would simply require Apple to use its existing capability to bypass the passcode on a passcode-locked iOS 7 phone[.]” But that undersells the implications of the legal argument the government is making: that anything a company already can do, it could be compelled to do under the All Writs Act in order to assist law enforcement. Were that the law, the blow to users’ trust in their encrypted devices, services, and products would be little different than if Apple and other companies were legally required to design backdoors into their encryption mechanisms (an idea the government just can’t seem to drop, its assurances in this brief notwithstanding). Entities around the world won’t buy security software if its makers cannot be trusted not to hand over their users’ secrets to the US government. That’s what makes the encryption in iOS 8 and later versions, which Apple has told the court it “would not have the technical ability” to bypass, so powerful — and so despised by the government: Because no matter how broadly the All Writs Act extends, no court can compel Apple to do the impossible.
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead-up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables.
    GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying from three to 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements; your email, instant messenger, and social networking “buddy lists”; logs showing who you have communicated with by phone or email; the passwords you use to access “communications services” (such as an email account); and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. (A small illustration of this split follows these notes.) In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explains in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
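    The content/metadata line drawn for web addresses above is mechanical: the site portion of an address is treated as metadata, the full address as content. Here is a minimal Python sketch of that split, using the example address from the document; the classification rule is as quoted above, but the code itself is only an illustration, not anything from the GCHQ files.

        from urllib.parse import urlsplit

        def classify(url):
            # Per the policy described above: scheme + host is "metadata",
            # the full address including the path is "content".
            parts = urlsplit(url)
            return {
                "metadata": f"{parts.scheme}://{parts.netloc}",
                "content": url,
            }

        print(classify("https://www.gchq.gov.uk/what_we_do"))
        # {'metadata': 'https://www.gchq.gov.uk',
        #  'content': 'https://www.gchq.gov.uk/what_we_do'}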