Future of the Web: Group items tagged "been"


Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
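    As a rough illustration (not drawn from the article, and with hypothetical class names), a publisher might layer its own semantics onto plain XHTML like this:

      <div class="chapter">
        <p class="epigraph">Go, litel bok, go, litel myn tragedye...</p>
        <p>Ordinary body paragraphs need no special marking at all.</p>
        <p class="summary">A closing summary, visible as such to a stylesheet or an
          XSLT transform, yet still just a paragraph to any browser.</p>
      </div>

    A generic browser renders this as ordinary prose, while in-house tooling can treat class="epigraph" or class="summary" much as a richer schema would treat a dedicated element.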
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
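    To make the anatomy concrete: beyond the mimetype file and the XHTML content documents, the one fixed requirement is a small META-INF/container.xml that points at the package file. The sketch below follows the standard OCF layout; the OEBPS/ path is a common convention rather than a rule:

      <?xml version="1.0" encoding="UTF-8"?>
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>

    Everything a reader actually reads sits alongside that package file as ordinary XHTML and CSS.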
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
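    The clean-up involved looks roughly like this (the class name is illustrative of the kind of markup such an export produces, not the exact string InDesign emits). The export yields list items dressed up as paragraphs:

      <p class="bullet-item">Prefer ubiquitous tools over specialized ones</p>
      <p class="bullet-item">Prefer free software over proprietary systems</p>

    and a pass of search-and-replace turns them into structural markup that any tool can interpret without knowing the stylesheet:

      <ul>
        <li>Prefer ubiquitous tools over specialized ones</li>
        <li>Prefer free software over proprietary systems</li>
      </ul>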
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is a core member of the W3C’s family of XML specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
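    The full script referenced above handles headings, lists, emphasis and more; the abbreviated sketch below only shows the general shape of such a transformation, mapping body paragraphs onto ICML paragraph ranges. The “Body” style name, the story identifier, and the file names in the comment are placeholders, not values from the actual project:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- Hypothetical sketch only. Run with, for example:
           xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml -->
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <xsl:output method="xml" indent="yes"/>

        <!-- Wrap the whole chapter in a single ICML story. -->
        <xsl:template match="/">
          <Document DOMVersion="6.0">
            <Story Self="story_main">
              <xsl:apply-templates select="//xhtml:body//xhtml:p"/>
            </Story>
          </Document>
        </xsl:template>

        <!-- Each XHTML paragraph becomes a paragraph range tagged with a
             named style defined in the InDesign template. -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="normalize-space(.)"/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Because the mapping is driven by named styles, all visual decisions stay in the InDesign template, where designers expect to find them.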
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
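    The wrapper that a tool like eCub generates boils down to a package file listing the chapters and their reading order. The skeleton below follows the ePub 2 (OPF) package format; the identifier and file names are invented for illustration:

      <?xml version="1.0" encoding="UTF-8"?>
      <package xmlns="http://www.idpf.org/2007/opf"
               xmlns:dc="http://purl.org/dc/elements/1.1/"
               unique-identifier="bookid" version="2.0">
        <metadata>
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
          <item id="css"  href="book.css"        media-type="text/css"/>
          <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
          <item id="ch02" href="chapter02.xhtml" media-type="application/xhtml+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch01"/>
          <itemref idref="ch02"/>
        </spine>
      </package>

    The chapters themselves go in untouched as XHTML, which is why the step is so painless when the content already lives on the Web.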
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is an XML document type that browsers render natively, making it compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Why Kim Dotcom hasn't been extradited 3 years after the US smashed Megaupload | Ars Tec... - 0 views

  •  
    "Why Kim Dotcom hasn't been extradited 3 years after the US smashed Megaupload An extradition hearing is set for June 2015. Based on history, don't hold your breath. by Cyrus Farivar - Jan 18, 2015 10:30 pm UTC Share Tweet 37 Kim Dotcom made his initial play for the Billboard charts in late 2011. Kim Dotcom has never been shy. And in December 2011, roughly a month before things for Dotcom were set to drastically change, he still oozed with bravado: Dotcom released a song ("The Megaupload Song") in conjunction with producer Printz Board. It featured a number of major pop stars-including the likes of Kanye West, Jamie Foxx, and Serena Williams-all singing that they "love Megaupload."" [ # ! this is not an #IntellectualProperty #enforcement #issue, # ! it is a bunch of #governments (& friend companies) saying # ! #citizens that #information & #culture belong to '#Them'... # ! ;/ # ! ... and this doesn't work this way. # ! :) ]
Paul Merrell

The Digital Hunt for Duqu, a Dangerous and Cunning U.S.-Israeli Spy Virus - The Intercept - 1 views

  • “Is this related to what we talked about before?” Bencsáth said, referring to a previous discussion they’d had about testing new services the company planned to offer customers. “No, something else,” Bartos said. “Can you come now? It’s important. But don’t tell anyone where you’re going.” Bencsáth wolfed down the rest of his lunch and told his colleagues in the lab that he had a “red alert” and had to go. “Don’t ask,” he said as he ran out the door. A while later, he was at Bartos’ office, where a triage team had been assembled to address the problem they wanted to discuss. “We think we’ve been hacked,” Bartos said.
  • They found a suspicious file on a developer’s machine that had been created late at night when no one was working. The file was encrypted and compressed so they had no idea what was inside, but they suspected it was data the attackers had copied from the machine and planned to retrieve later. A search of the company’s network found a few more machines that had been infected as well. The triage team felt confident they had contained the attack but wanted Bencsáth’s help determining how the intruders had broken in and what they were after. The company had all the right protections in place—firewalls, antivirus, intrusion-detection and -prevention systems—and still the attackers got in.
  • Bencsáth was a teacher, not a malware hunter, and had never done such forensic work before. At the CrySyS Lab, where he was one of four advisers working with a handful of grad students, he did academic research for the European Union and occasional hands-on consulting work for other clients, but the latter was mostly run-of-the-mill cleanup work—mopping up and restoring systems after random virus infections. He’d never investigated a targeted hack before, let alone one that was still live, and was thrilled to have the chance. The only catch was, he couldn’t tell anyone what he was doing. Bartos’ company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients. The triage team had taken mirror images of the infected hard drives, so they and Bencsáth spent the rest of the afternoon poring over the copies in search of anything suspicious. By the end of the day, they’d found what they were looking for—an “infostealer” string of code that was designed to record passwords and other keystrokes on infected machines, as well as steal documents and take screenshots. It also catalogued any devices or systems that were connected to the machines so the attackers could build a blueprint of the company’s network architecture. The malware didn’t immediately siphon the stolen data from infected machines but instead stored it in a temporary file, like the one the triage team had found. The file grew fatter each time the infostealer sucked up data, until at some point the attackers would reach out to the machine to retrieve it from a server in India that served as a command-and-control node for the malware.
  • Bencsáth took the mirror images and the company’s system logs with him, after they had been scrubbed of any sensitive customer data, and over the next few days scoured them for more malicious files, all the while being coy to his colleagues back at the lab about what he was doing. The triage team worked in parallel, and after several more days they had uncovered three additional suspicious files. When Bencsáth examined one of them—a kernel-mode driver, a program that helps the computer communicate with devices such as printers—his heart quickened. It was signed with a valid digital certificate from a company in Taiwan (digital certificates are documents ensuring that a piece of software is legitimate). Wait a minute, he thought. Stuxnet—the cyberweapon that was unleashed on Iran’s uranium-enrichment program—also used a driver that was signed with a certificate from a company in Taiwan. That one came from RealTek Semiconductor, but this certificate belonged to a different company, C-Media Electronics. The driver had been signed with the certificate in August 2009, around the same time Stuxnet had been unleashed on machines in Iran.
Gonzalo San Gil, PhD.

The-Speculative-Invoicing-Handbook.pdf - 0 views

  •  
    "Stage One: Put The Kettle On So you've received a letter, you feel intruded upon and threatened. You're wondering if you even did w hat you've been accused of - well, at least, what your connection , has been accused of... You're not the first and you're unlikely to be the last to get one of these 'nastygrams'. The first step to managing the situation you've been put in is to tackle it calmly. You have been invited to play a game. This particular game requires careful thought and rational, planned actions. It is not best played while emotions are running high; never do anything in haste. You're reading this handbook so you've clearly used your head so far and are on the right track. If you've not already done so, make yourself a cuppa and sit down to read the rest of this. Relax... you're among friends now. Welcome to the team."
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection; more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to grant very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions to the five-year retention limitation than just the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over the top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the Scientific American linked article, notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of information not in the Nature/Scientific American version of a sensationalist nature, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Paul Merrell

Exclusive: Inside America's Plan to Kill Online Privacy Rights Everywhere | The Cable - 0 views

  • The United States and its key intelligence allies are quietly working behind the scenes to kneecap a mounting movement in the United Nations to promote a universal human right to online privacy, according to diplomatic sources and an internal American government document obtained by The Cable. The diplomatic battle is playing out in an obscure U.N. General Assembly committee that is considering a proposal by Brazil and Germany to place constraints on unchecked internet surveillance by the National Security Agency and other foreign intelligence services. American representatives have made it clear that they won't tolerate such checks on their global surveillance network. The stakes are high, particularly in Washington -- which is seeking to contain an international backlash against NSA spying -- and in Brasilia, where Brazilian President Dilma Roussef is personally involved in monitoring the U.N. negotiations.
  • The Brazilian and German initiative seeks to apply the right to privacy, which is enshrined in the International Covenant on Civil and Political Rights (ICCPR), to online communications. Their proposal, first revealed by The Cable, affirms a "right to privacy that is not to be subjected to arbitrary or unlawful interference with their privacy, family, home, or correspondence." It notes that while public safety may "justify the gathering and protection of certain sensitive information," nations "must ensure full compliance" with international human rights laws. A final version the text is scheduled to be presented to U.N. members on Wednesday evening and the resolution is expected to be adopted next week. A draft of the resolution, which was obtained by The Cable, calls on states to "to respect and protect the right to privacy," asserting that the "same rights that people have offline must also be protected online, including the right to privacy." It also requests the U.N. high commissioner for human rights, Navi Pillay, present the U.N. General Assembly next year with a report on the protection and promotion of the right to privacy, a provision that will ensure the issue remains on the front burner.
  • Publicly, U.S. representatives say they're open to an affirmation of privacy rights. "The United States takes very seriously our international legal obligations, including those under the International Covenant on Civil and Political Rights," Kurtis Cooper, a spokesman for the U.S. mission to the United Nations, said in an email. "We have been actively and constructively negotiating to ensure that the resolution promotes human rights and is consistent with those obligations." But privately, American diplomats are pushing hard to kill a provision of the Brazilian and German draft which states that "extraterritorial surveillance" and mass interception of communications, personal information, and metadata may constitute a violation of human rights. The United States and its allies, according to diplomats, outside observers, and documents, contend that the Covenant on Civil and Political Rights does not apply to foreign espionage.
  • In recent days, the United States circulated to its allies a confidential paper highlighting American objectives in the negotiations, "Right to Privacy in the Digital Age -- U.S. Redlines." It calls for changing the Brazilian and German text so "that references to privacy rights are referring explicitly to States' obligations under ICCPR and remove suggestion that such obligations apply extraterritorially." In other words: America wants to make sure it preserves the right to spy overseas. The U.S. paper also calls on governments to promote amendments that would weaken Brazil's and Germany's contention that some "highly intrusive" acts of online espionage may constitute a violation of freedom of expression. Instead, the United States wants to limit the focus to illegal surveillance -- which the American government claims it never, ever does. Collecting information on tens of millions of people around the world is perfectly acceptable, the Obama administration has repeatedly said. It's authorized by U.S. statute, overseen by Congress, and approved by American courts.
  • "Recall that the USG's [U.S. government's] collection activities that have been disclosed are lawful collections done in a manner protective of privacy rights," the paper states. "So a paragraph expressing concern about illegal surveillance is one with which we would agree." The privacy resolution, like most General Assembly decisions, is neither legally binding nor enforceable by any international court. But international lawyers say it is important because it creates the basis for an international consensus -- referred to as "soft law" -- that over time will make it harder and harder for the United States to argue that its mass collection of foreigners' data is lawful and in conformity with human rights norms. "They want to be able to say ‘we haven't broken the law, we're not breaking the law, and we won't break the law,'" said Dinah PoKempner, the general counsel for Human Rights Watch, who has been tracking the negotiations. The United States, she added, wants to be able to maintain that "we have the freedom to scoop up anything we want through the massive surveillance of foreigners because we have no legal obligations."
  • The United States negotiators have been pressing their case behind the scenes, raising concerns that the assertion of extraterritorial human rights could constrain America's effort to go after international terrorists. But Washington has remained relatively muted about their concerns in the U.N. negotiating sessions. According to one diplomat, "the United States has been very much in the backseat," leaving it to its allies, Australia, Britain, and Canada, to take the lead. There is no extraterritorial obligation on states "to comply with human rights," explained one diplomat who supports the U.S. position. "The obligation is on states to uphold the human rights of citizens within their territory and areas of their jurisdictions."
  • The position, according to Jamil Dakwar, the director of the American Civil Liberties Union's Human Rights Program, has little international backing. The International Court of Justice, the U.N. Human Rights Committee, and the European Court have all asserted that states do have an obligation to comply with human rights laws beyond their own borders, he noted. "Governments do have obligation beyond their territories," said Dakwar, particularly in situations, like the Guantanamo Bay detention center, where the United States exercises "effective control" over the lives of the detainees. Both PoKempner and Dakwar suggested that courts may also judge that the U.S. dominance of the Internet places special legal obligations on it to ensure the protection of users' human rights.
  • "It's clear that when the United States is conducting surveillance, these decisions and operations start in the United States, the servers are at NSA headquarters, and the capabilities are mainly in the United States," he said. "To argue that they have no human rights obligations overseas is dangerous because it sends a message that there is void in terms of human rights protection outside countries territory. It's going back to the idea that you can create a legal black hole where there is no applicable law." There were signs emerging on Wednesday that America may have been making ground in pressing the Brazilians and Germans to back on one of its toughest provisions. In an effort to address the concerns of the U.S. and its allies, Brazil and Germany agreed to soften the language suggesting that mass surveillance may constitute a violation of human rights. Instead, it simply deep "concern at the negative impact" that extraterritorial surveillance "may have on the exercise of and enjoyment of human rights." The U.S., however, has not yet indicated it would support the revised proposal.
  • The concession "is regrettable. But it’s not the end of the battle by any means," said Human Rights Watch’s PoKempner. She added that there will soon be another opportunity to corral America's spies: a U.N. discussion on possible human rights violations as a result of extraterritorial surveillance will soon be taken up by the U.N. High commissioner.
  •  
    Woo-hoo! Go get'em, U.N.
Gonzalo San Gil, PhD.

The "Internet Governance" Farce and its "Multi-stakeholder" Illusion | La Quadrature du... - 0 views

  •  
    by Jérémie Zimmermann For almost 15 years, "Internet Governance" meetings1 have been drawing attention and driving our imaginaries towards believing that consensual rules for the Internet could emerge from global "multi-stakeholder" discussions. A few days ahead of the "NETmundial" Forum in Sao Paulo it has become obvious that "Internet Governance" is a farcical way of keeping us busy and hiding a sad reality: Nothing concrete in these 15 years, not a single action, ever emerged from "multi-stakeholder" meetings, while at the same time, technology as a whole has been turned against its users, as a tool for surveillance, control and oppression.
Gonzalo San Gil, PhD.

Free Software Foundation statement on the GNU Bash "shellshock" vulnerability - Free So... - 0 views

  •  
    "by Free Software Foundation - Published on Sep 25, 2014 04:51 PM A major security vulnerability has been discovered in the free software shell GNU Bash. The most serious issues have already been fixed, and a complete fix is well underway. GNU/Linux distributions are working quickly to release updated packages for their users. All Bash users should upgrade immediately, and audit the list of remote network services running on their systems. " [# ! + http://security.stackexchange.com/questions/68168/is-there-a-short-command-to-test-if-my-server-is-secure-against-the-shellshock-b]
Gonzalo San Gil, PhD.

Learn GNU/Linux the Fun Way | Linux Journal - 0 views

  •  
    "Oct 02, 2014 By Doc Searls in Education Sometimes a gift just falls in your lap. This month, it came in the form of an e-mail out of the blue from Jared Nielsen, one of two brothers (the other is J.R. Nielsen) who created The Hello World Program, "an educational web series making computer science fun and accessible to all". If it had been just that, I might not have been interested. "
Gonzalo San Gil, PhD.

Top 3 Online Resources For Learning The Command Line - 1 views

  •  
    "If you've been using Linux or OS X for a couple of years, you must have run into the command line (or the terminal) at some point. It could have been a command to fix a problem or enable a feature. These days it's easy to just ignore the command line. And for most users, GUI is really enough."
Gary Edwards

SVG Is The Future Of Application Development | SitePoint » - 0 views

  •  
    I could see this coming a mile away, and it's about time! ".... So if HTML can't deliver for us here, what will? Microsoft wants us to use Silverlight and Adobe wants us to use Flash and AIR, of course. And Apple…? Apple ostensibly wants us to use HTML5's canvas. Both Microsoft's and Adobe's contenders are proprietary, which seems to be reason enough for web developers to avoid them to a certain degree, and all of them muddy HTML, which is a dangerous thing for the semantic web. But Apple actually has a trick up its sleeve. Like Mozilla's been doing with Firefox, Apple has quietly been implementing better support for SVG, the W3C's Recommendation for XML-based vector graphics, into WebKit. SVG delivers the same kind of vector graphics capabilities that Flash does, but it does so using all the interoperability benefits that XML brings along for the ride. SVG is great for graphically displaying both text and images, manipulating them with declarative visual primitives, and it comes with a host of lickable effects. Ironically, SVG was originally jointly developed by both Adobe and Sun Microsystems but recently it's Sun Labs that has been doing interesting stuff with the technology. The most compelling experiment of this kind has to be Sun Labs's Lively Kernel project....."
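Part of what the excerpt is pointing at is that SVG is ordinary XML, so it inherits the whole XML toolchain. A minimal sketch, with a made-up file name and shapes, generating a standalone vector graphic:

```python
# Minimal sketch: because SVG is plain XML, a resolution-independent
# graphic can be produced with nothing more than string templating.
# The file name and shapes here are illustrative.
svg_markup = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <rect width="200" height="200" fill="#eef"/>
  <circle cx="100" cy="100" r="60" fill="steelblue"/>
  <text x="100" y="106" text-anchor="middle" fill="white">vector</text>
</svg>"""

with open("demo.svg", "w", encoding="utf-8") as f:
    f.write(svg_markup)
```

Opened in a WebKit or Gecko browser, the circle and text stay crisp at any zoom level, and the markup can be searched, styled, and transformed like any other XML document.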
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • Tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems. A toy model of this implicit trust appears after the notes below.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
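Returning to the SS7 excerpts above: the protocol weakness they describe is easy to model. The sketch below is a toy, not a real SS7 stack; the class and method names only loosely echo SS7/MAP concepts (the HLR subscriber database, the AnyTimeInterrogation location query), and every value is made up. The point it encodes is the one from the article: SS7 peers are implicitly trusted, so a location query's success never depends on who is asking.

```python
# Toy model of the SS7 trust problem described above - NOT a real SS7
# stack. Names loosely echo SS7/MAP concepts; all data is illustrative.
class HomeLocationRegister:
    """Stand-in for a carrier database mapping subscribers to serving cells."""

    def __init__(self, carrier):
        self.carrier = carrier
        self.serving_cell = {}  # phone number -> current serving cell

    def any_time_interrogation(self, requester, msisdn):
        # The heart of the problem: the answer does not depend on who
        # the requester is. Any peer with SS7 access gets the location.
        return self.serving_cell.get(msisdn)


hlr = HomeLocationRegister("CarrierA")
hlr.serving_cell["+15555550100"] = "cell 4217 (roughly a city block)"

# A legitimate roaming partner and a surveillance vendor with leased
# SS7 access are indistinguishable at the protocol level.
for peer in ("CarrierB-roaming-gateway", "surveillance-vendor"):
    print(peer, "->", hlr.any_time_interrogation(peer, "+15555550100"))
```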
Paul Merrell

NZ Prime Minister John Key Retracts Vow to Resign if Mass Surveillance Is Shown - 0 views

  • In August 2013, as evidence emerged of the active participation by New Zealand in the “Five Eyes” mass surveillance program exposed by Edward Snowden, the country’s conservative Prime Minister, John Key, vehemently denied that his government engages in such spying. He went beyond mere denials, expressly vowing to resign if it were ever proven that his government engages in mass surveillance of New Zealanders. He issued that denial, and the accompanying resignation vow, in order to reassure the country over fears provoked by a new bill he advocated to increase the surveillance powers of that country’s spying agency, Government Communications Security Bureau (GCSB) — a bill that passed by one vote thanks to the Prime Minister’s guarantees that the new law would not permit mass surveillance.
  • Since then, a mountain of evidence has been presented that indisputably proves that New Zealand does exactly that which Prime Minister Key vehemently denied — exactly that which he said he would resign if it were proven was done. Last September, we reported on a secret program of mass surveillance at least partially implemented by the Key government that was designed to exploit the very law that Key was publicly insisting did not permit mass surveillance. At the time, Snowden, citing that report as well as his own personal knowledge of GCSB’s participation in the mass surveillance tool XKEYSCORE, wrote in an article for The Intercept: Let me be clear: any statement that mass surveillance is not performed in New Zealand, or that the internet communications are not comprehensively intercepted and monitored, or that this is not intentionally and actively abetted by the GCSB, is categorically false. . . . The prime minister’s claim to the public, that “there is no and there never has been any mass surveillance” is false. The GCSB, whose operations he is responsible for, is directly involved in the untargeted, bulk interception and algorithmic analysis of private communications sent via internet, satellite, radio, and phone networks.
  • A series of new reports last week by New Zealand journalist Nicky Hager, working with my Intercept colleague Ryan Gallagher, has added substantial proof demonstrating GCSB’s widespread use of mass surveillance. An article last week in The New Zealand Herald demonstrated that “New Zealand’s electronic surveillance agency, the GCSB, has dramatically expanded its spying operations during the years of John Key’s National Government and is automatically funnelling vast amounts of intelligence to the US National Security Agency.” Specifically, its “intelligence base at Waihopai has moved to ‘full-take collection,’ indiscriminately intercepting Asia-Pacific communications and providing them en masse to the NSA through the controversial NSA intelligence system XKeyscore, which is used to monitor emails and internet browsing habits.” Moreover, the documents “reveal that most of the targets are not security threats to New Zealand, as has been suggested by the Government,” but “instead, the GCSB directs its spying against a surprising array of New Zealand’s friends, trading partners and close Pacific neighbours.” A second report late last week published jointly by Hager and The Intercept detailed the role played by GCSB’s Waihopai base in aiding NSA’s mass surveillance activities in the Pacific (as Hager was working with The Intercept on these stories, his house was raided by New Zealand police for 10 hours, ostensibly to find Hager’s source for a story he published that was politically damaging to Key).
  • That the New Zealand government engages in precisely the mass surveillance activities Key vehemently denied is now barely in dispute. Indeed, a former director of GCSB under Key, Sir Bruce Ferguson, while denying any abuse of New Zealander’s communications, now admits that the agency engages in mass surveillance.
  • Meanwhile, Russel Norman, the head of the country’s Green Party, said in response to these stories that New Zealand is “committing crimes” against its neighbors in the Pacific by subjecting them to mass surveillance, and insists that the Key government broke the law because that dragnet necessarily includes the communications of New Zealand citizens when they travel in the region.
  • So now that it’s proven that New Zealand does exactly that which Prime Minister Key vowed would cause him to resign if it were proven, is he preparing his resignation speech? No: that’s something a political official with a minimal amount of integrity would do. Instead — even as he now refuses to say what he has repeatedly said before: that GCSB does not engage in mass surveillance — he’s simply retracting his pledge as though it were a minor irritant, something to be casually tossed aside:
  • When asked late last week whether New Zealanders have a right to know what their government is doing in the realm of digital surveillance, the Prime Minister said: “as a general rule, no.” And he expressly refuses to say whether New Zealand is doing that which he swore repeatedly it was not doing, as this excellent interview from Radio New Zealand sets forth: Interviewer: “Nicky Hager’s revelations late last week . . . have stoked fears that New Zealanders’ communications are being indiscriminately caught in that net. . . . The Prime Minister, John Key, has in the past promised to resign if it were found to be mass surveillance of New Zealanders . . . Earlier, Mr. Key was unable to give me an assurance that mass collection of communications from New Zealanders in the Pacific was not taking place.” PM Key: “No, I can’t. I read the transcript [of former GCSB Director Bruce Ferguson’s interview] – I didn’t hear the interview – but I read the transcript, and you know, look, there’s a variety of interpretations – I’m not going to critique–”
  • Interviewer: “OK, I’m not asking for a critique. Let’s listen to what Bruce Ferguson did tell us on Friday:” Ferguson: “The whole method of surveillance these days, is sort of a mass collection situation – individualized: that is mission impossible.” Interviewer: “And he repeated that several times, using the analogy of a net which scoops up all the information. . . . I’m not asking for a critique with respect to him. Can you confirm whether he is right or wrong?” Key: “Uh, well I’m not going to go and critique the guy. And I’m not going to give a view of whether he’s right or wrong” . . . . Interviewer: “So is there mass collection of personal data of New Zealand citizens in the Pacific or not?” Key: “I’m just not going to comment on where we have particular targets, except to say that where we go and collect particular information, there is always a good reason for that.”
  • From “I will resign if it’s shown we engage in mass surveillance of New Zealanders” to “I won’t say if we’re doing it” and “I won’t quit either way despite my prior pledges.” Listen to the whole interview: both to see the type of adversarial questioning to which U.S. political leaders are so rarely subjected, but also to see just how obfuscating Key’s answers are. The history of reporting from the Snowden archive has been one of serial dishonesty from numerous governments: such as the way European officials at first pretended to be outraged victims of NSA only for it to be revealed that, in many ways, they are active collaborators in the very system they were denouncing. But, outside of the U.S. and U.K. itself, the Key government has easily been the most dishonest over the last 20 months: one of the most shocking stories I’ve seen during this time was how the Prime Minister simultaneously plotted in secret to exploit the 2013 proposed law to implement mass surveillance at exactly the same time that he persuaded the public to support it by explicitly insisting that it would not allow mass surveillance. But overtly reneging on a public pledge to resign is a new level of political scandal. Key was just re-elected for his third term, and like any political official who stays in power too long, he has the despot’s mentality that he’s beyond all ethical norms and constraints. But by the admission of his own former GCSB chief, he has now been caught red-handed doing exactly that which he swore to the public would cause him to resign if it were proven. If nothing else, the New Zealand media ought to treat that public deception from its highest political official with the level of seriousness it deserves.
  •  
    It seems the U.S. is not the only nation that has liars for head of state. 
Gary Edwards

Google's ARC Beta runs Android apps on Chrome OS, Windows, Mac, and Linux | Ars Technica - 0 views

  • So calling all developers: You can now (probably, maybe) run your Android apps on just about anything—Android, Chrome OS, Windows, Mac, and Linux—provided you fiddle with the ARC Welder and submit your app to the Chrome Web Store.
  • The App Runtime for Chrome and Native Client are hugely important projects because they potentially allow Google to push a "universal binary" strategy on developers. "Write your app for Android, and we'll make it run on almost every popular OS! (other than iOS)" Google Play Services support is a major improvement for ARC and signals just how ambitious this project is. Some day it will be a great sales pitch to convince developers to write for Android first, which gives them apps on all these desktop OSes for free.
  •  
    Thanks, Marbux. ARC appears to be an extraordinary technology. Funny, but Florian has been pushing Native Client (NaCL) since it was first ported from Firefox to Chrome. Looks like he was right. "In September, Google launched ARC, the "App Runtime for Chrome," a project that allowed Android apps to run on Chrome OS. A few days later, a hack revealed the project's full potential: it enabled ARC on every "desktop" version of Chrome, meaning you could unofficially run Android apps on Chrome OS, Windows, Mac OS X, and Linux. ARC made Android apps run on nearly every computing platform (save iOS). ARC is an early beta, though, so Google has kept the project's reach very limited: only a handful of apps have been ported to ARC, which have all been the result of close collaborations between Google and the app developer. Now, though, Google is taking two big steps forward with the latest developer preview: it's allowing any developer to run their app on ARC via a new Chrome app packager, and it's allowing ARC to run on any desktop OS with a Chrome browser. ARC runs on Windows, Mac, Linux, and Chrome OS thanks to Native Client (abbreviated "NaCL"). NaCL is a Chrome sandboxing technology that allows Chrome apps and plugins to run at "near native" speeds, taking full advantage of the system's CPU and GPU. Native Client turns Chrome into a development platform: write to it, and it'll run on all desktop Chrome browsers. Google ported a full Android stack to Native Client, allowing Android apps to run on most major OSes. With the original ARC release, there was no official process for getting an Android app running on the Chrome platform (other than working with Google). Now Google has released the adorably-named ARC Welder, a Chrome app which will convert any Android app into an ARC-powered Chrome app. It's mainly for developers to package up an APK and submit it to the Chrome Web Store, but anyone can package and launch an APK from the app directly."
Gonzalo San Gil, PhD.

Peter Sunde: The 'Pirate Movement' is Dead | TorrentFreak - 0 views

  •  
    " Peter Sunde on April 4, 2015 C: 0 Opinion Ever since someone had the idea of starting a "pirate party," there've been discussions about the necessity for such a party. In the trail of that discussion, there's always been the one about whether the "pirate movement" is alive or not."
Paul Merrell

How Edward Snowden Changed Everything | The Nation - 0 views

  • Ben Wizner, who is perhaps best known as Edward Snowden’s lawyer, directs the American Civil Liberties Union’s Speech, Privacy & Technology Project. Wizner, who joined the ACLU in August 2001, one month before the 9/11 attacks, has been a force in the legal battles against torture, watch lists, and extraordinary rendition since the beginning of the global “war on terror.” On October 15, we met with Wizner in an upstate New York pub to discuss the state of privacy advocacy today. In sometimes sardonic tones, he talked about the transition from litigating on issues of torture to privacy advocacy, differences between corporate and state-sponsored surveillance, recent developments in state legislatures and the federal government, and some of the obstacles impeding civil liberties litigation. The interview has been edited and abridged for publication.
  • Many of the technologies, both military technologies and surveillance technologies, that are developed for purposes of policing the empire find their way back home and get repurposed. You saw this in Ferguson, where we had military equipment in the streets to police nonviolent civil unrest, and we’re seeing this with surveillance technologies, where things that are deployed for use in war zones are now commonly in the arsenals of local police departments. For example, a cellphone surveillance tool that we call the StingRay—which mimics a cellphone tower and communicates with all the phones around—was really developed as a military technology to help identify targets. Now, because it’s so inexpensive, and because there is a surplus of these things that are being developed, it ends up getting pushed down into local communities without local democratic consent or control.
  • SG & TP: How do you see the current state of the right to privacy? BW: I joked when I took this job that I was relieved that I was going to be working on the Fourth Amendment, because finally I’d have a chance to win. That was intended as gallows humor; the Fourth Amendment had been a dishrag for the last several decades, largely because of the war on drugs. The joke in civil liberties circles was, “What amendment?” But I was able to make this joke because I was coming to Fourth Amendment litigation from something even worse, which was trying to sue the CIA for torture, or targeted killings, or various things where the invariable outcome was some kind of non-justiciability ruling. We weren’t even reaching the merits at all. It turns out that my gallows humor joke was prescient.
  • The truth is that over the last few years, we’ve seen some of the most important Fourth Amendment decisions from the Supreme Court in perhaps half a century. Certainly, I think the Jones decision in 2012 [U.S. v. Jones], which held that GPS tracking was a Fourth Amendment search, was the most important Fourth Amendment decision since Katz in 1967 [Katz v. United States], in terms of starting a revolution in Fourth Amendment jurisprudence signifying that changes in technology were not just differences in degree, but they were differences in kind, and require the Court to grapple with it in a different way. Just two years later, you saw the Court holding that police can’t search your phone incident to an arrest without getting a warrant [Riley v. California]. Since 2012, at the level of Supreme Court jurisprudence, we’re seeing a recognition that technology has required a rethinking of the Fourth Amendment at the state and local level. We’re seeing a wave of privacy legislation that’s really passing beneath the radar for people who are not paying close attention. It’s not just happening in liberal states like California; it’s happening in red states like Montana, Utah, and Wyoming. And purple states like Colorado and Maine. You see as many libertarians and conservatives pushing these new rules as you see liberals. It really has cut across at least party lines, if not ideologies. My overall point here is that with respect to constraints on government surveillance—I should be more specific—law-enforcement government surveillance—momentum has been on our side in a way that has surprised even me.
  • SG & TP: Do you think that increased privacy protections will happen on the state level before they happen on the federal level? BW: I think so. For example, look at what occurred with the death penalty and the Supreme Court’s recent Eighth Amendment jurisprudence. The question under the Eighth Amendment is, “Is the practice cruel and unusual?” The Court has looked at what it calls “evolving standards of decency” [Trop v. Dulles, 1958]. It matters to the Court, when it’s deciding whether a juvenile can be executed or if a juvenile can get life without parole, what’s going on in the states. It was important to the litigants in those cases to be able to show that even if most states allowed the bad practice, the momentum was in the other direction. The states that were legislating on this most recently were liberalizing their rules, were making it harder to execute people under 18 or to lock them up without the possibility of parole. I think you’re going to see the same thing with Fourth Amendment and privacy jurisprudence, even though the Court doesn’t have a specific doctrine like “evolving standards of decency.” The Court uses this much-maligned test, “Do individuals have a reasonable expectation of privacy?” We’ll advance the argument, I think successfully, that part of what the Court should look at in considering whether an expectation of privacy is reasonable is showing what’s going on in the states. If we can show that a dozen or eighteen state legislatures have enacted a constitutional protection that doesn’t exist in federal constitutional law, I think that that will influence the Supreme Court.
  • The question is will it also influence Congress. I think there the answer is also “yes.” If you’re a member of the House or the Senate from Montana, and you see that your state legislature and your Republican governor have enacted privacy legislation, you’re not going to be worried about voting in that direction. I think this is one of those places where, unlike civil rights, where you saw most of the action at the federal level and then getting forced down to the states, we’re going to see more action at the state level getting funneled up to the federal government.
  •  
    A must-read. Ben Wizner discusses the current climate in the courts in government surveillance cases and how Edward Snowden's disclosures have affected that, and much more. Wizner is not only Edward Snowden's lawyer, he is also the coordinator of all ACLU litigation on electronic surveillance matters.
Paul Merrell

The Wifi Alliance, Coming Soon to Your Neighborhood: 5G Wireless | Global Research - Ce... - 0 views

  • Just as any new technology claims to offer the most advanced development, promising that its definition of progress will cure society’s ills or make life easier by eliminating the drudgery of antiquated appliances, the Wifi Alliance was organized as a worldwide wireless network to connect "everyone and everything, everywhere" as it promised "improvements to nearly every aspect of daily life." The Alliance, which makes no pretense of potential health or environmental concerns, further proclaimed (and it may be correct) that there are "more wifi devices than people on earth". It is that inescapable exposure to ubiquitous wireless technologies wherein lies the problem.
  • Even prior to the 1997 introduction of commercially available wifi devices, which have since saturated every industrialized country, EMF hot spots were everywhere. Today, with the addition of cell and cordless phones and towers, broadcast antennas, smart meters and pervasive computer wifi, both adults and especially vulnerable children are surrounded 24-7 by an inescapable presence, with little recognition that all radiation exposure is cumulative.
  • The National Toxicology Program (NTP), a branch of the US National Institutes of Health (NIH), conducted the world’s largest study on radiofrequency radiation used by the US telecommunications industry and found a "significant statistical increase in brain and heart cancers" in animals exposed to EMF (electromagnetic fields). The NTP study confirmed the connection between mobile and wireless phone use and human brain cancer risks, and its conclusions were supported by other epidemiological peer-reviewed studies. Of special note is that the studies citing biological risk to human health involved exposures below accepted international standards.
  •  
    ""…what this means is that the current safety standards as off by a factor of about 7 million.' Pointing out that a recent FCC Chair was a former lobbyist for the telecom industry, "I know how they've attacked various people.  In the U.S. … the funding for the EMF research [by the Environmental Protection Agency] was cut off starting in 1986 … The U.S. Office of Naval Research had been funding a fair amount of research in this area [in the '70s]. They [also] … stopped funding new grants in 1986 …  And then the NIH a few years later followed the same path …" As if all was not reason enough for concern or even downright panic,  the next generation of wireless technology known as 5G (fifth generation), representing the innocuous sounding Internet of Things, promises a quantum leap in power and exceedingly more damaging health impacts with mandatory exposures.      The immense expansion of radiation emissions from the current wireless EMF frequency band and 5G about to be perpetrated on an unsuspecting American public should be criminal.  Developed by the US military as non lethal perimeter and crowd control, the Active Denial System emits a high density, high frequency wireless radiation comparable to 5G and emits radiation in the neighborhood of 90 GHz.    The current Pre 5G, frequency band emissions used in today's commercial wireless range is from 300 Mhz to 3 GHZ as 5G will become the first wireless system to utilize millimeter waves with frequencies ranging from 30 to 300 GHz. One example of the differential is that a current LANS (local area network system) uses 2.4 GHz.  Hidden behind these numbers is an utterly devastating increase in health effects of immeasurable impacts so stunning as to numb the senses. In 2017, the international Environmental Health Trust recommended an EU moratorium "on the roll-out of the fifth generation, 5G, for telecommunication until potential hazards for human health and the environment hav
Paul Merrell

Assange Keeps Warning Of AI Censorship, And It's Time We Started Listening - 0 views

  • Where power is not overtly totalitarian, wealthy elites have bought up all media, first in print, then radio, then television, and used it to advance narratives that are favorable to their interests. Not until humanity gained widespread access to the internet has our species had the ability to freely and easily share ideas and information on a large scale without regulation by the iron-fisted grip of power. This newfound ability arguably had a direct impact on the election for the most powerful elected office in the most powerful government in the world in 2016, as a leak publishing outlet combined with alternative and social media enabled ordinary Americans to tell one another their own stories about what they thought was going on in their country.This newly democratized narrative-generating power of the masses gave those in power an immense fright, and they’ve been working to restore the old order of power controlling information ever since. And the editor-in-chief of the aforementioned leak publishing outlet, WikiLeaks, has been repeatedly trying to warn us about this coming development.
  • In a statement that was recently read during the “Organising Resistance to Internet Censorship” webinar, sponsored by the World Socialist Web Site, Assange warned of how “digital super states” like Facebook and Google have been working to “re-establish discourse control”, giving authority over how ideas and information are shared back to those in power. Assange went on to say that the manipulative attempts of world power structures to regain control of discourse in the information age have been “operating at a scale, speed, and increasingly at a subtlety, that appears likely to eclipse human counter-measures.” What this means is that using increasingly more advanced forms of artificial intelligence, power structures are becoming more and more capable of controlling the ideas and information that people are able to access and share with one another, hiding information which goes against the interests of those power structures and elevating narratives which support those interests, all of course while maintaining the illusion of freedom and lively debate.
  • To be clear, this is already happening. Due to a recent shift in Google’s “evaluation methods”, traffic to left-leaning and anti-establishment websites has plummeted, with sites like WikiLeaks, Alternet, Counterpunch, Global Research, Consortium News, Truthout, and WSWS losing up to 70 percent of the views they were getting prior to the changes. Powerful billionaire oligarchs Pierre Omidyar and George Soros are openly financing the development of “an automated fact-checking system” (AI) to hide “fake news” from the public.
  • To make matters even worse, there’s no way to know the exact extent to which this is going on, because we know that we can absolutely count on the digital super states in question to lie about it. In the lead-up to the 2016 election, Twitter CEO Jack Dorsey was asked point-blank if Twitter was obstructing the #DNCLeaks from trending, a hashtag people were using to build awareness of the DNC emails which had just been published by WikiLeaks, and Dorsey flatly denied it. More than a year later, we learned from a prepared testimony before the Senate Subcommittee on Crime and Terrorism by Twitter’s acting general counsel Sean J. Edgett that this was completely false and Twitter had indeed been doing exactly that to protect the interests of US political structures by sheltering the public from information allegedly gathered by Russian hackers.
  • Imagine going back to a world like the Middle Ages where you only knew the things your king wanted you to know, except you could still watch innocuous kitten videos on Youtube. That appears to be where we may be headed, and if that happens the possibility of any populist movement arising to hold power to account may be effectively locked out from the realm of possibility forever.To claim that these powerful new media corporations are just private companies practicing their freedom to determine what happens on their property is to bury your head in the sand and ignore the extent to which these digital super states are already inextricably interwoven with existing power structures. In a corporatist system of government, which America unquestionably has, corporate censorship is government censorship, of an even more pernicious strain than if Jeff Sessions were touring the country burning books. The more advanced artificial intelligence becomes, the more adept these power structures will become at manipulating us. Time to start paying very close attention to this.
Paul Merrell

Symantec: CIA Linked To Cyberattacks In 16 Countries - 0 views

  • Internet and computer security company Symantec has issued a statement today related to the Vault 7 WikiLeaks documents leaked from the CIA, saying that the methods and protocols described in the documents are consistent with cyberattacks they’d been tracking for years. Symantec says they now believe that the CIA hacking tool Fluxwire is malware previously known as Corentry, which Symantec had attributed to an unknown cyberespionage group called Longhorn, which apparently was the CIA. They described Longhorn as having been active since at least 2011, and responsible for attacks in at least 16 countries across the world, targeting governments and NGOs, as well as financial, energy, and natural resource companies, things that would generally be of interest to a nation-state.
  • While the WikiLeaks documents themselves have been comparatively short on details, as WikiLeaks continues to share specific vulnerabilities with companies so they can fix them before the details are leaked to the general public, the ability of security companies like Symantec to link the CIA to known hacking operations could prove to be even more enlightening as to the scope of CIA cyber-espionage the world over.