
Future of the Web / Group items tagged "long since"


Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
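    As a rough illustration of how that class-based extension can work, a publisher might mark otherwise generic XHTML elements the way the sketch below does. The class names here are invented for the example rather than drawn from any published microformat, so treat them as placeholders:

      <p class="epigraph">Only connect.</p>
      <p class="summary">This chapter surveys the production workflow.</p>
      <div class="recipe">
        <p class="ingredient">2 cups flour</p>
      </div>

    Any browser renders these as ordinary paragraphs, while downstream scripts and stylesheets can key on the class values to recover the intended semantics.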
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
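    To make that anatomy concrete, unzipping a typical ePub file exposes a layout along the following lines. This is a representative sketch rather than the output of any particular tool, and the file names inside the content folder vary from producer to producer:

      mimetype                  (the single line "application/epub+zip")
      META-INF/container.xml    (points to the package file below)
      OEBPS/content.opf         (the book's metadata and manifest)
      OEBPS/toc.ncx             (table of contents used by reading systems)
      OEBPS/chapter01.html      (the book's content, as XHTML)
      OEBPS/styles.css          (presentation for that XHTML)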
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
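    Unpacking an IDML file shows a structure roughly parallel to ePub's. Again, this is a representative sketch; the exact file names depend on the document:

      designmap.xml             (the map that ties the package together)
      Resources/Styles.xml      (paragraph and character style definitions)
      Resources/Graphic.xml     (colours and swatches)
      Resources/Fonts.xml
      MasterSpreads/...         (master page geometry)
      Spreads/...               (the layout spreads)
      Stories/...               (the text content, one XML file per story)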
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, yielding a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
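    The kind of cleanup involved is simple enough to show. An export might represent a bulleted item as a styled paragraph, as in the first fragment below (the class name is illustrative, not necessarily what InDesign emits), and the fix is to restore the structural list markup:

      <!-- as exported: presentation-oriented -->
      <p class="bullet-item">Prefer free software over proprietary systems.</p>
      <p class="bullet-item">Prefer simple tools controlled by human beings.</p>

      <!-- after cleanup: structural -->
      <ul>
        <li>Prefer free software over proprietary systems.</li>
        <li>Prefer simple tools controlled by human beings.</li>
      </ul>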
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
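    The actual script is not reproduced here, but a heavily simplified sketch conveys the flavour of the transformation. The ICML element names below follow Adobe's general IDML pattern, the style names are placeholders rather than those used in the real script, and the sketch assumes the source documents declare the XHTML namespace; a real transform also has to handle inline markup, lists, tables, and so on:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">

        <!-- each XHTML paragraph becomes an InCopy paragraph range -->
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

        <!-- headings get their own paragraph style -->
        <xsl:template match="h:h1">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Chapter Title">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>

    Run through xsltproc (something like "xsltproc xhtml-to-icml.xsl chapter01.html > chapter01.icml", with the file names standing in for whatever a given project uses), a sheet of this shape produces the .icml file that gets placed into the template.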
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
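    For readers curious what the "wrapper" amounts to, the heart of it is the OPF package file that eCub (or any comparable tool) generates. A minimal ePub 2 package file looks roughly like the following; the title, identifier, and file names are placeholders:

      <?xml version="1.0" encoding="UTF-8"?>
      <package xmlns="http://www.idpf.org/2007/opf"
               unique-identifier="bookid" version="2.0">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:isbn:0000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="toc"   href="toc.ncx"        media-type="application/x-dtbncx+xml"/>
          <item id="css"   href="book.css"       media-type="text/css"/>
          <item id="chap1" href="chapter01.html" media-type="application/xhtml+xml"/>
        </manifest>
        <spine toc="toc">
          <itemref idref="chap1"/>
        </spine>
      </package>

    The manifest lists the files in the archive, the spine gives their reading order, and the stylesheet referenced there is the one hand-written piece mentioned above.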
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele... - 0 views

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    "In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. "
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr (.) Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over the top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the Scientific American linked article, notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of information not in the Nature/Scientific American version of a sensationalist nature, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Gonzalo San Gil, PhD.

Pro-Privacy Senator Wyden on Fighting the NSA From Inside the System | WIRED - 1 views

  •  
    "Senator Ron Wyden thought he knew what was going on. The Democrat from Oregon, who has served on the Senate Select Committee on Intelligence since 2001, thought he knew the nature of the National Security Agency's surveillance activities. As a committee member with a classified clearance, he received regular briefings to conduct oversight."
  •  
    I'm a retired lawyer in Oregon and a devout civil libertarian. Wyden is one of my senators. I have been closely following this government digital surveillance stuff since the original articles in 1988 that first broke the story on the Five Eyes' Echelon surveillance system. E.g., http://goo.gl/mCxs6Y While I will grant that Wyden has bucked the system gently (he's far more a drag anchor than a propeller), he has shown no political courage on the NSA stuff whatsoever. In the linked article, he admits keeping his job as a Senator was more important to him than doing anything *effective* to stop the surveillance in its tracks. His "working from the inside" line notwithstanding, he allowed creation of a truly Orwellian state to develop without more than a few ineffective yelps that were never listened to because he lacked the courage to take a stand and bring down the house that NSA built with documentary evidence. It took a series of whistleblowers culminating in Edward Snowden's courageous willingness to spend the rest of his life in prison to bring the public to its currently educated state. Wyden on the other hand, didn't even have the courage to lay it all out in the public Congressional record when he could have done so at any time without risking more than his political career because of the Constitution's Speech and Debate Clause that absolutely protects Wyden from criminal prosecution had he done so. I don't buy arguments that fear of NSA blackmail can excuse politicians from doing their duty. That did not stop the Supreme Court from unanimously laying down an opinion, in Riley v. California, that brings to an end the line of case decisions based on Smith v. Maryland that is the underpinning of the NSA/DoJ position on access to phone metadata without a warrant. http://scholar.google.com/scholar_case?case=9647156672357738355 Elected and appointed government officials owe a duty to the citizens of this land to protect and defend the Constitution that legallh
Gary Edwards

Brendan's Roadmap Updates: Open letter to Microsoft's Chris Wilson and their fight to s... - 0 views

  • The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
  • In my opinion the notion that we need to add features so that ajax programming would be easier is plain wrong. ajax is a hack and also the notion of a webapp is a hack. the web was created in a document centric view. All w3c standards are also based on the same document notion. The heart of the web, the HTTP protocol is designed to support a web of documents and as such is stateless. the proper solution, IMO, is not to evolve ES for the benefit of ajax and webapps, but rather generalize the notion of a document browser that connects to a web of documents to a general purpose client engine that connects to a network of internet applications. thus the current web (document) browser just becomes one such internet application.
  •  
    the obvious conflict of interest between the standards-based web and proprietary platforms advanced by Microsoft, and the rationales for keeping the web's client-side programming language small while the proprietary platforms rapidly evolve support for large languages, does not help maintain the fiction that only clashing high-level philosophies are involved here. Readers may not know that Ecma has no provision for "minor releases" of its standards, so any ES3.1 that was approved by TG1 would inevitably be given a whole edition number, presumably becoming the 4th Edition of ECMAScript. This is obviously contentious given all the years that the majority of TG1, sometimes even apparently including Microsoft representatives, has worked on ES4, and the developer expectations set by this long-standing effort. A history of Microsoft's post-ES3 involvement in the ECMAScript standard group, leading up to the overt split in TG1 in March, is summarized here. The history of ECMAScript since its beginnings in November 1996 shows that when Microsoft was behind in the market (against Netscape in 1996-1997), it moved aggressively in the standards body to evolve standards starting with ES1 through ES3. Once Microsoft dominated the market, the last edition of the standard was left to rot -- ES3 was finished in 1999 -- and even easy-to-fix standards conformance bugs in IE JScript went unfixed for eight years (so three years to go from Edition 1 to 3, then over eight to approach Edition 4). Now that the proposed 4th edition looks like a competitive threat, the world suddenly hears in detail about all those bugs, spun as differences afflicting "JavaScript" that should inform a new standard.
Gonzalo San Gil, PhD.

Peter Sunde: The 'Pirate Movement' is Dead | TorrentFreak - 0 views

  •  
    " Peter Sunde on April 4, 2015 C: 0 Opinion Ever since someone had the idea of starting a "pirate party," there've been discussions about the necessity for such a party. In the trail of that discussion, there's always been the one about whether the "pirate movement" is alive or not."
Paul Merrell

Social Media Giants Choking Independent News Site Traffic to a Trickle - 0 views

  • Several prominent figures, including Web inventor Tim Berners-Lee, warned the EU Parliament that its proposed censorship measure would begin transforming the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.
  • For much of the year, independent media has felt the sting of increased social media censorship, as the “revolving door” between U.S. intelligence agencies and social-media companies has manifested in a crackdown on news that challenges official government narratives. With many notable independent news websites having shut down since then as a result, those that remain afloat are being censored like never before, with social media traffic from Facebook and Twitter completely cut off in some cases. Among such websites, social media censorship by the most popular social networks is now widely regarded to be the worst it has ever been – a chilling reality for any who seek fact-based perspectives on major world events that differ from those to be found on well-known corporate-media outlets that consistently toe the government line. Last August, MintPress reported that a new Google algorithm targeting “fake news” had quashed traffic to many independent news and advocacy sites, with sites such as the American Civil Liberties Union, Democracy Now, and WikiLeaks, seeing their returns from Google searches experience massive drops. The World Socialist Website, one of the affected pages, reported a 67 percent decrease in Google returns while MintPress experienced an even larger decrease of 76 percent in Google search returns. The new algorithm targeted online publications on both sides of the political spectrum critical of U.S. imperialism, foreign wars, and other long-standing government policies. Now, less than a year later, the situation has become even more dire. Several independent media pages have reported that their social media traffic has sharply declined since March and – in some cases – stopped almost entirely since June began. For instance, independent media website Antimedia – a page with over 2 million likes and follows – saw its traffic drop from around 150,000 page views per day earlier this month to around 12,000 as of this week. As a reference, this time last year Antimedia’s traffic stood at nearly 300,000 a day.
Gonzalo San Gil, PhD.

Open source PDF readers, creators, and editors | Opensource.com - 0 views

  •  
    "Aren't we supposed to be living in a paperless world by now? I can't be the only person who imagined the office of the future, free from the confines of the eight and a half by eleven sheet (or A4, for my international friends), would have long since arrived. Instead, we've managed to land in an intermediate state of not paperless, but less paper. It could be worse."
Paul Merrell

Security Experts Oppose Government Access to Encrypted Communication - The New York Times - 0 views

  • An elite group of security technologists has concluded that the American and British governments cannot demand special access to encrypted communications without putting the world’s most confidential data and critical infrastructure in danger.A new paper from the group, made up of 14 of the world’s pre-eminent cryptographers and computer scientists, is a formidable salvo in a skirmish between intelligence and law enforcement leaders, and technologists and privacy advocates. After Edward J. Snowden’s revelations — with security breaches and awareness of nation-state surveillance at a record high and data moving online at breakneck speeds — encryption has emerged as a major issue in the debate over privacy rights.
  • That has put Silicon Valley at the center of a tug of war. Technology companies including Apple, Microsoft and Google have been moving to encrypt more of their corporate and customer data after learning that the National Security Agency and its counterparts were siphoning off digital communications and hacking into corporate data centers.
  • Yet law enforcement and intelligence agency leaders argue that such efforts thwart their ability to monitor kidnappers, terrorists and other adversaries. In Britain, Prime Minister David Cameron threatened to ban encrypted messages altogether. In the United States, Michael S. Rogers, the director of the N.S.A., proposed that technology companies be required to create a digital key to unlock encrypted data, but to divide the key into pieces and secure it so that no one person or government agency could use it alone.The encryption debate has left both sides bitterly divided and in fighting mode. The group of cryptographers deliberately issued its report a day before James B. Comey Jr., the director of the Federal Bureau of Investigation, and Sally Quillian Yates, the deputy attorney general at the Justice Department, are scheduled to testify before the Senate Judiciary Committee on the concerns that they and other government agencies have that encryption technologies will prevent them from effectively doing their jobs.
  • ...2 more annotations...
  • The new paper is the first in-depth technical analysis of government proposals by leading cryptographers and security thinkers, including Whitfield Diffie, a pioneer of public key cryptography, and Ronald L. Rivest, the “R” in the widely used RSA public cryptography algorithm. In the report, the group said any effort to give the government “exceptional access” to encrypted communications was technically unfeasible and would leave confidential data and critical infrastructure like banks and the power grid at risk. Handing governments a key to encrypted communications would also require an extraordinary degree of trust. With government agency breaches now the norm — most recently at the United States Office of Personnel Management, the State Department and the White House — the security specialists said authorities could not be trusted to keep such keys safe from hackers and criminals. They added that if the United States and Britain mandated backdoor keys to communications, China and other governments in foreign markets would be spurred to do the same.
  • “Such access will open doors through which criminals and malicious nation-states can attack the very individuals law enforcement seeks to defend,” the report said. “The costs would be substantial, the damage to innovation severe and the consequences to economic growth hard to predict. The costs to the developed countries’ soft power and to our moral authority would also be considerable.”
  •  
    Our system of government does not expect that every criminal will be apprehended and convicted. There are numerous values our society believes are more important. Some examples: [i] a presumption of innocence unless guilt is established beyond any reasonable doubt; [ii] the requirement that government officials convince a neutral magistrate that they have probable cause to believe that a search or seizure will produce evidence of a crime; [iii] many communications cannot be compelled to be disclosed and used in evidence, such as attorney-client communications, spousal communications, and priest-penitent communications; and [iv] etc. Moral of my story: the government needs a much stronger reason to justify interception of communications than saying, "some crooks will escape prosecution if we can't do that." We have a right to whisper to each other, concealing our communications from all others. Why does the right to whisper privately disappear if our whisperings are done electronically? The Supreme Court took its first step on a very slippery slope when it permitted wiretapping in Olmstead v. United States, 277 U.S. 438, 48 S. Ct. 564, 72 L. Ed. 944 (1928). https://goo.gl/LaZGHt It's been a long slide ever since. It's past time to revisit Olmstead and recognize that American citizens have the absolute right to communicate privately. "The President … recognizes that U.S. citizens and institutions should have a reasonable expectation of privacy from foreign or domestic intercept when using the public telephone system." - Brent Scowcroft, U.S. National Security Advisor, National Security Decision Memorandum 338 (1 September 1976) (Ford administration), http://www.fas.org/irp/offdocs/nsdm-ford/nsdm-338.pdf
Fabien Cadet

The Evolution of Web Design (infographic) | kissmetrics.com, 2011-04? by Sean Work - 3 views

  •  
    « Can you believe that the first published website is already 20 years old? Web design has come a long way since the first website was published by Tim Berners-Lee in 1991. This infographic is a peek at the evolutionary tale of web design, which is ironically still in its infant stages. Enjoy the infographic below and let your imagination wander. You might find yourself asking, "Where will web design be in the next 20 years?" »
Gonzalo San Gil, PhD.

Why Facebook Just Launched Its Own 'Dark Web' Site | WIRED [+ TOR IS THE NSA http://lwn... - 2 views

  •  
    "Facebook has never had much of a reputation for letting users hide their identities online. But now the world's least anonymous website has just joined the Web's most anonymous network." [# ! Just a #PR #Campaign… # ! … as, You'll learn soon… TOR IS THE NSA Posted Jul 9, 2008 21:13 UTC (Wed) by dulles (guest, #45450) Parent article: GNU/Linux free software tools to preserve your online privacy, anonymity and security (FSM) # ! Anyway, since long ago, You Must Know that there is no privacy in # ! a Network built by others -Governments and Big Companies # ! among 'em. # ! Don' come to The Web expecting privacy, as You won't look for # ! intimacy in a Stadium Full of Pe@ple… # ! … but meet the places You get in.]
Gonzalo San Gil, PhD.

Children Are Leading The Cord Cutting Revolution | Techdirt - 1 views

  •  
    "from the mikey-doesn't-like-it dept The cable industry has long pretended that the cord cutting phenomenon either isn't real or that the only people cutting the cord are aging losers living in their parents' basement. Of course when you actually look at the data, while cord cutting remains a slow but growing phenomenon, most of the cord cutters are young, highly educated, employed, and make a good amount of money." [# ! … #Freedom from The #Start: #Good #Move. # ! Notwithstanding '#OldSchool' #manipulators' #criticism…]
Paul Merrell

Free At Last: New DMCA Rules Might Make the Web a Better Place | nsnbc international - 0 views

  • David Mao, the Librarian of Congress, has issued new rules pertaining to exemptions to the Digital Millennium Copyright Act (DMCA) after a 3 year battle that was expedited in the wake of the Volkswagen scandal.
  • Opposition to this new decision is coming from the Environmental Protection Agency (EPA) and the auto industry because the DMCA prohibits “circumventing encryption or access controls to copy or modify copyrighted works.” For example, GM “claimed the exemption ‘could introduce safety and security issues as well as facilitate violation of various laws designed specifically to regulate the modern car, including emissions, fuel economy, and vehicle safety regulations’.” The exemption in question is in Section 1201 which forbids the unlocking of software access controls which has given the auto industry the unique ability to “threaten legal action against anyone who needs to get around those restrictions, no matter how legitimate the reason.” Journalist Nick Statt points out that this provision “made it illegal in the past to unlock your smartphone from its carrier or even to share your HBO Go password with a friend. It’s designed to let corporations protect copyrighted material, but it allows them to crackdown on circumventions even when they’re not infringing on those copyrights or trying to access or steal proprietary information.”
  • Kit Walsh, staff attorney for the Electronic Frontier Foundation (EFF), explained that the “‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal—where VW was caught manipulating smog tests—it can be used instead to hide wrongdoing hidden in computer code.” Walsh continued: “We are pleased that analysts will now be able to examine the software in the cars we drive without facing legal threats from car manufacturers, and that the Librarian has acted to promote competition in the vehicle aftermarket and protect the long tradition of vehicle owners tinkering with their cars and tractors. The year-long delay in implementing the exemptions, though, is disappointing and unjustified. The VW smog tests and a long run of security vulnerabilities have shown researchers and drivers need the exemptions now.” As part of the new changes, gamers can “modify an old video game so it doesn’t perform a check with an authentication server that has since been shut down” and after the publisher cuts off support for the video game.
  • ...1 more annotation...
  • Another positive from the change is that smartphone users will be able to jailbreak their phone and finally enjoy running operating systems and applications from any source, not just those approved by the manufacturer. And finally, those who remix excerpts from DVDs, Blu – Ray discs or downloading services will be allowed to mix the material into theirs without violating the DMCA.
Paul Merrell

XKeyscore Exposé Reaffirms the Need to Rid the Web of Tracking Cookies | Elec... - 0 views

  • The Intercept published an expose on the NSA's XKeyscore program. Along with information on the breadth and scale of the NSA's metadata collection, The Intercept revealed how the NSA relies on unencrypted cookie data to identify users. As The Intercept says: "The NSA’s ability to piggyback off of private companies’ tracking of their own users is a vital instrument that allows the agency to trace the data it collects to individual users. It makes no difference if visitors switch to public Wi-Fi networks or connect to VPNs to change their IP addresses: the tracking cookie will follow them around as long as they are using the same web browser and fail to clear their cookies." The NSA slides released by The Intercept give detailed guides to understanding the data transmitted by these cookies, as well as how to find unique machine identifiers that analysts can use to differentiate between multiple machines using the same IP address. We've written before about how spy agencies piggyback on social media account data to find Internet users' names or other identifying info, and these slides drive home the point that HTTP cookies leave users vulnerable to government surveillance, since any intermediary (or spy agency) can read the sensitive data they contain.
  • Worse yet, most of the time these identifying cookies come from third-party sources on webpages, and users have no meaningful way to opt out of receiving them (short of blocking all third party cookies) since advertisers (the main server of these types of cookies) refuse to honor the Do Not Track header.  Browser makers could help address this sort of non-consensual tracking by both advertisers and the NSA with some simple technical changes—changes that have been shown to reduce the number of third party cookies received by 67%. So far, though, they've been unwilling to build privacy protecting features in by default. Until they do, the best way for users to protect themselves is by installing a privacy protecting app like Privacy Badger, which is designed to block these types of uniquely identifying tracking cookies, or HTTPS Everywhere to block the transmission of HTTP cookies.
Paul Merrell

Revealed: How DOJ Gagged Google over Surveillance of WikiLeaks Volunteer - The Intercept - 0 views

  • The Obama administration fought a legal battle against Google to secretly obtain the email records of a security researcher and journalist associated with WikiLeaks. Newly unsealed court documents obtained by The Intercept reveal the Justice Department won an order forcing Google to turn over more than one year’s worth of data from the Gmail account of Jacob Appelbaum (pictured above), a developer for the Tor online anonymity project who has worked with WikiLeaks as a volunteer. The order also gagged Google, preventing it from notifying Appelbaum that his records had been provided to the government. The surveillance of Appelbaum’s Gmail account was tied to the Justice Department’s long-running criminal investigation of WikiLeaks, which began in 2010 following the transparency group’s publication of a large cache of U.S. government diplomatic cables. According to the unsealed documents, the Justice Department first sought details from Google about a Gmail account operated by Appelbaum in January 2011, triggering a three-month dispute between the government and the tech giant. Government investigators demanded metadata records from the account showing email addresses of those with whom Appelbaum had corresponded between the period of November 2009 and early 2011; they also wanted to obtain information showing the unique IP addresses of the computers he had used to log in to the account.
  • The Justice Department argued in the case that Appelbaum had “no reasonable expectation of privacy” over his email records under the Fourth Amendment, which protects against unreasonable searches and seizures. Rather than seeking a search warrant that would require it to show probable cause that he had committed a crime, the government instead sought and received an order to obtain the data under a lesser standard, requiring only “reasonable grounds” to believe that the records were “relevant and material” to an ongoing criminal investigation. Google repeatedly attempted to challenge the demand, and wanted to immediately notify Appelbaum that his records were being sought so he could have an opportunity to launch his own legal defense. Attorneys for the tech giant argued in a series of court filings that the government’s case raised “serious First Amendment concerns.” They noted that Appelbaum’s records “may implicate journalistic and academic freedom” because they could “reveal confidential sources or information about WikiLeaks’ purported journalistic or academic activities.” However, the Justice Department asserted that “journalists have no special privilege to resist compelled disclosure of their records, absent evidence that the government is acting in bad faith,” and refused to concede Appelbaum was in fact a journalist. It claimed it had acted in “good faith throughout this criminal investigation, and there is no evidence that either the investigation or the order is intended to harass the … subscriber or anyone else.” Google’s attempts to fight the surveillance gag order angered the government, with the Justice Department stating that the company’s “resistance to providing the records” had “frustrated the government’s ability to efficiently conduct a lawful criminal investigation.”
  • Google accused the government of hyperbole and argued that the backlash over the Twitter order did not justify secrecy related to the Gmail surveillance. “Rather than demonstrating how unsealing the order will harm its well-publicized investigation, the government lists a parade of horribles that have allegedly occurred since it unsealed the Twitter order, yet fails to establish how any of these developments could be further exacerbated by unsealing this order,” wrote Google’s attorneys. “The proverbial toothpaste is out of the tube, and continuing to seal a materially identical order will not change it.” But Google’s attempt to overturn the gag order was denied by magistrate judge Ivan D. Davis in February 2011. The company launched an appeal against that decision, but this too was rebuffed, in March 2011, by District Court judge Thomas Selby Ellis, III.
  • The Justice Department wanted to keep the surveillance secret largely because of an earlier public backlash over its WikiLeaks investigation. In January 2011, Appelbaum and other WikiLeaks volunteers – including Icelandic parliamentarian Birgitta Jonsdottir – were notified by Twitter that the Justice Department had obtained data about their accounts. This disclosure generated widespread news coverage and controversy; the government says in the unsealed court records that it “failed to anticipate the degree of damage that would be caused” by the Twitter disclosure and did not want to “exacerbate this problem” when it went after Appelbaum’s Gmail data. The court documents show the Justice Department said the disclosure of its Twitter data grab “seriously jeopardized the [WikiLeaks] investigation” because it resulted in efforts to “conceal evidence” and put public pressure on other companies to resist similar surveillance orders. It also claimed that officials named in the subpoena ordering Twitter to turn over information were “harassed” after a copy was published by Intercept co-founder Glenn Greenwald at Salon in 2011. (The only specific evidence of the alleged harassment cited by the government is an email that was sent to an employee of the U.S. Attorney’s office that purportedly said: “You guys are fucking nazis trying to controll [sic] the whole fucking world. Well guess what. WE DO NOT FORGIVE. WE DO NOT FORGET. EXPECT US.”)
  • The government agreed to unseal some of the court records on Apr. 1 this year, and they were apparently turned over to Appelbaum on May 14 through a notification sent to his Gmail account. The files were released on condition that they would contain some redactions, which are bizarre and inconsistent, in some cases censoring the name of “WikiLeaks” from cited public news reports. Not all of the documents in the case – such as the original surveillance orders contested by Google – were released as part of the latest disclosure. Some contain “specific and sensitive details of the investigation” and “remain properly sealed while the grand jury investigation continues,” according to the court records from April this year. Appelbaum, an American citizen who is based in Berlin, called the case “a travesty that continues at a slow pace” and said he felt it was important to highlight “the absolute madness in these documents.”
  • He told The Intercept: “After five years, receiving such legal documents is neither a shock nor a needed confirmation. … Will we ever see the full documents about our respective cases? Will we even learn the names of those signing so-called legal orders against us in secret sealed documents? Certainly not in a timely manner and certainly not in a transparent, just manner.” The 32-year-old, who has recently collaborated with Intercept co-founder Laura Poitras to report revelations about National Security Agency surveillance for German news magazine Der Spiegel, said he plans to remain in Germany “in exile, rather than returning to the U.S. to experience more harassment of a less than legal kind.”
  • “My presence in Berlin ensures that the cost of physically harassing me or politically harassing me is much higher than when I last lived on U.S. soil,” Appelbaum said. “This allows me to work as a journalist freely from daily U.S. government interference. It also ensures that any further attempts to continue this will be forced into the open through [a Mutual Legal Assistance Treaty] and other international processes. The German government is less likely to allow the FBI to behave in Germany as they do on U.S. soil.” The Justice Department’s WikiLeaks investigation is headed by prosecutors in the Eastern District of Virginia. Since 2010, the secretive probe has seen activists affiliated with WikiLeaks compelled to appear before a grand jury and the FBI attempting to infiltrate the group with an informant. Earlier this year, it was revealed that the government had obtained the contents of three core WikiLeaks staffers’ Gmail accounts as part of the investigation.
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • Tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  • The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
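Setting the Verint question aside, the core technical claim in the excerpts above is simple: carriers' location registers already amount to a travel log, and an SS7-based system only has to query and aggregate them. As a rough sketch with entirely invented records (no real SS7 interface or carrier API is involved), grouping tower sightings by subscriber and day is enough to reconstruct a movement trail:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical carrier-style records: (timestamp, subscriber, tower id, lat, lon).
# Real systems draw on the carrier's own location registers; these values are invented.
records = [
    ("2014-08-20T08:05", "subscriber-A", "tower-17", 38.889, -77.035),
    ("2014-08-20T12:40", "subscriber-A", "tower-02", 38.907, -77.072),
    ("2014-08-21T09:15", "subscriber-A", "tower-88", 40.712, -74.006),
]

trail = defaultdict(list)
for ts, who, tower, lat, lon in records:
    day = datetime.fromisoformat(ts).date()
    trail[(who, day)].append((tower, lat, lon))

# Each day's list of towers is already a coarse travel log: in this invented data
# the subscriber stays near one city on the 20th and registers hundreds of
# kilometers away on the 21st.
for (who, day), sightings in sorted(trail.items()):
    towers = ", ".join(t for t, _, _ in sightings)
    print(f"{who} on {day}: {towers}")
```

The point of the sketch is the asymmetry the article describes: the hard part is obtaining the records through SS7 access or carrier cooperation, not the analysis, which is why the brochures can promise global coverage once that access exists.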
Paul Merrell

U.S. Says It Spied on 89,000 Targets Last Year, But the Number Is Deceptive | Threat Le... - 0 views

  • About 89,000 foreigners or organizations were targeted for spying under a U.S. surveillance order last year, according to a new transparency report. The report was released for the first time Friday by the Office of the Director of National Intelligence, upon order of the president, in the wake of surveillance leaks by NSA whistleblower Edward Snowden. But the report, which covers only surveillance orders issued in 2013, doesn’t tell the whole story about how many individuals the spying targeted or how many Americans were caught in the surveillance that targeted foreigners. Civil liberties groups say the real number is likely “orders of magnitude” larger than this. “Even if it was an honest definition of ‘target’—that is, an individual instead of a group—that also is not encompassing those who are ancillary to a target and are caught up in the dragnet,” says Kurt Opsahl, deputy general counsel of the Electronic Frontier Foundation.
  • The report, remarkably, shows that the government obtained just one order last year under Section 702 of FISA—which allows for bulk collection of data on foreigners—and that this one order covered 89,138 targets. But, as the report notes, “target” can refer to “an individual person, a group, an organization composed of multiple individuals or a foreign power that possesses or is likely to communicate foreign intelligence information.” Furthermore, Section 702 orders are actually certificates issued by the FISA Court that can cover surveillance of an entire facility. And since, as the government points out in its report, the government cannot know how many people use a facility, the figure only “reflects an estimate of the number of known users of particular facilities (sometimes referred to as selectors) subject to intelligence collection under those Certifications,” the report notes.
  • “If you’re actually trying to get a sense of the number of human beings affected or the number of Americans affected, the number of people affected is vastly, vastly larger,” says Julian Sanchez, senior fellow at the Cato Institute. “And how many of those are Americans is impossible to say. But [although] you may not think you are routinely communicating with foreign persons, [this] is not any kind of assurance that your communications are not part of the traffic subject to interception.” Sanchez points out that each individual targeted is likely communicating with dozens or hundreds of others, whose communications will be picked up in the surveillance. “And probably a lot of these targets are not individuals but entire web sites or companies. While [a company like the Chinese firm] Huawei might be a target, thousands of emails used by thousands of employees will be swept up.” How many of those employees might be American or communicating with Americans is unknown.
  • Also revealed in today’s report is the number of times the government has queried the controversial phone records database it created by collecting the phone records of every subscriber from U.S. providers. According to the report, the government used 423 “selectors” to search its massive phone records database, which includes records going back to at least 2006 when the program began. A search involves querying a specific phone number or device ID that appears in the database. The government has long maintained that its collection of phone records isn’t a violation of its authority, since it only views the records of specific individuals targeted in an investigation. But such searches, even if targeted at phone numbers used by foreigners, would include calls made to and from Americans as well as calls exchanged with people two or three hops out from the targeted number.
  • In its report, the government indicated that the 423 selectors involved just 248 “known or presumed” Americans whose information was collected by the agency in the database. But Opsahl says that both of these numbers are deceptive given what we know about the database and how it’s been used. “We know it’s affecting millions of people,” he points out. But “then we have estimated numbers of affected people [that are just] in the three digits. That requires some effort [on the government's part] to find a way to do the definition of the number [in such a way] to make it as small as possible.”
  • One additional figure today’s report covers is the number of National Security Letters the government issued last year to businesses to obtain data on accountholders and users—19,212. NSLs are written demands from the FBI that compel internet service providers, credit companies, financial institutions and others to hand over confidential records about their customers, such as subscriber information, phone numbers and e-mail addresses, websites visited, and more. These letters are a powerful tool because they do not require court approval, and they come with a built-in gag order, preventing recipients from disclosing to anyone that they have received an NSL. An FBI agent looking into a possible anti-terrorism case can self-issue an NSL to a credit bureau, ISP, or phone company with only the sign-off of the Special Agent in Charge of their office. The FBI has merely to assert that the information is “relevant” to an investigation into international terrorism or clandestine intelligence activities.
  • The FBI has issued hundreds of thousands of NSLs over the years and has been reprimanded for abusing them. Last year a federal judge ruled that the use of NSLs is unconstitutional, due to the gag order that accompanies them, and ordered the government to stop using them. Her ruling, however, was stayed pending the government’s appeal.
  • According to the government’s report today, the 19,000 NSLs issued last year involved more than 38,000 requests for information.
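The "two or three hops" point above is easiest to see with a back-of-the-envelope calculation. The contact count per person below is an assumption chosen purely for illustration, not a figure from the report:

```python
# Rough upper bound on people reachable within h hops of a set of selectors,
# assuming each person is in contact with about k others. Both k and the use of
# the 423-selector figure as a starting point are illustrative assumptions.
def reachable(selectors: int, k: int, hops: int) -> int:
    total = selectors
    frontier = selectors
    for _ in range(hops):
        frontier *= k          # each person at the frontier adds ~k new contacts
        total += frontier
    return total

selectors = 423   # query selectors cited in the transparency report
k = 100           # assumed contacts per person (illustrative)
for hops in (1, 2, 3):
    print(hops, "hops:", f"{reachable(selectors, k, hops):,}")
# ~43 thousand at one hop, ~4.3 million at two, ~430 million at three -- all
# before de-duplication, which is why the headline counts look so small.
```

Real contact graphs overlap heavily, so the true numbers are lower than this upper bound, but the growth explains why civil liberties groups call the official figures deceptive.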
Paul Merrell

Censorship in the Age of Large Cloud Providers - Lawfare - 2 views

  • Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can internet freedom last? The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender's phone and decrypted on the receiver's phone; no part of the network can eavesdrop on the messages. Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn't a fixed website, it doesn't need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all. A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.
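The Roskomnadzor episode comes down to a property of IP-range blocking: a blocked CIDR range covers every tenant behind it, not just the service the censor wants gone, while a service that keeps rotating to fresh addresses simply steps outside the range. The addresses and ranges below are invented for illustration and do not correspond to Telegram's or any provider's actual allocations:

```python
import ipaddress

# Hypothetical blocked cloud ranges and service addresses, for illustration only.
blocked_ranges = [ipaddress.ip_network(c) for c in ("52.58.0.0/15", "35.156.0.0/14")]

services = {
    "messaging-app (current endpoint)": "52.59.10.20",
    "messaging-app (rotated endpoint)": "18.196.4.7",
    "unrelated web shop":               "52.58.200.5",
    "unrelated news site":              "35.157.33.9",
}

for name, addr in services.items():
    ip = ipaddress.ip_address(addr)
    blocked = any(ip in net for net in blocked_ranges)
    print(f"{name:35} {addr:16} {'BLOCKED' if blocked else 'reachable'}")

# Blocking the ranges takes down the bystander services hosted inside them, while
# the app moves to an address outside the ranges -- the cat-and-mouse dynamic and
# the collateral damage described above.
```

This is why blocking 19 million addresses broke unrelated Russian services and still failed to pin down the app.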
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has long been known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Paul Merrell

Rural America and the 5G Digital Divide. Telecoms Expanding Their "Toxic Infrastructure... - 0 views

  • While there is considerable telecom hubris regarding the 5G rollout and increasing speculation that the next generation of wireless is not yet ready for Prime Time, the industry continues to make promises to Rural America that it has no intention of fulfilling. Decades-long promises to deliver digital Utopia to rural America by T-Mobile, Verizon and AT&T have never materialized.  
  • In 2017, the USDA reported that 29% of American farms had no internet access. The FCC says that 14 million rural Americans and 1.2 million Americans living on tribal lands do not have 4G LTE on their phones, and that 30 million rural residents do not have broadband service compared to 2% of urban residents. It’s beginning to sound like a Third World country. Despite an FCC $4.5 billion annual subsidy to carriers to provide broadband service in rural areas, the FCC reports that “over 24 million Americans do not have access to high-speed internet service, the bulk of them in rural areas,” while a Microsoft study found that “162 million people across the US do not have internet service at broadband speeds.” At the same time, only three cable companies have access to 70% of the market in a sweetheart deal to hike rates as they avoid competition and the FCC looks the other way. The FCC believes that it would cost $40 billion to bring broadband access to 98% of the country, with expansion in rural America even more expensive. While the FCC has pledged a $2 billion, ten-year plan to identify rural wireless locations, only 4 million rural American businesses and homes will be targeted, a mere drop in the bucket. Which brings us to rural mapping: since the advent of the digital age, there have been no accurate maps identifying where broadband service is available in rural America and where it is not available. The FCC has a long history of promulgating unreliable and unverified carrier-provided numbers, as the Commission has repeatedly “bungled efforts to produce accurate broadband maps” that would have facilitated rural coverage. During the Senate Commerce Committee hearing on April 10th regarding broadband mapping, critical testimony questioned whether the FCC and/or the telecom industry have either the commitment or the proficiency to provide 5G to rural America. Members of the Committee shared concerns that 5G might put rural America further behind the curve so as to never catch up with the rest of the country.