Future of the Web / Group items tagged: lies, digital


Paul Merrell

Archiveteam - 0 views

  • HISTORY IS OUR FUTURE And we've been trashing our history Archive Team is a loose collective of rogue archivists, programmers, writers and loudmouths dedicated to saving our digital heritage. Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions - and done our best to save the history before it's lost forever. Along the way, we've gotten attention, resistance, press and discussion, but most importantly, we've gotten the message out: IT DOESN'T HAVE TO BE THIS WAY. This website is intended to be an offloading point and information depot for a number of archiving projects, all related to saving websites or data that is in danger of being lost. Besides serving as a hub for team-based pulling down and mirroring of data, this site will provide advice on managing your own data and rescuing it from the brink of destruction. Currently Active Projects (Get Involved Here!) Archive Team recruiting Want to code for Archive Team? Here's a starting point.
  • Who We Are and how you can join our cause! Deathwatch is where we keep track of sites that are sickly, dying or dead. Fire Drill is where we keep track of sites that seem fine but a lot depends on them. Projects is a comprehensive list of AT endeavors. Philosophy describes the ideas underpinning our work. Some Starting Points The Introduction is an overview of basic archiving methods. Why Back Up? Because they don't care about you. Back Up your Facebook Data Learn how to liberate your personal data from Facebook. Software will assist you in regaining control of your data by providing tools for information backup, archiving and distribution. Formats will familiarise you with the various data formats, and how to ensure your files will be readable in the future. Storage Media is about where to get it, what to get, and how to use it. Recommended Reading links to others sites for further information. Frequently Asked Questions is where we answer common questions.
  •  
    The Archive Team Warrior is a virtual archiving appliance. You can run it to help with the Archive Team archiving efforts. It will download sites and upload them to our archive - and it's really easy to do! The warrior is a virtual machine, so there is no risk to your computer. The warrior will only use your bandwidth and some of your disk space. It will get tasks from and report progress to the Tracker. Basic usage: the warrior runs on Windows, OS X and Linux using a virtual machine. You'll need one of: VirtualBox (recommended) or VMware Workstation/Player (free-gratis for personal use); see below for alternative virtual machines. Archive Team partners with, and contributes lots of archives to, the Wayback Machine. Here's how you can help by contributing some bandwidth if you run an always-on box with an internet connection.
Paul Merrell

EU unveils landmark law curbing power of tech giants | News | DW | 15.12.2020 - 0 views

  • The European Union unveiled landmark legislation on Tuesday that lays out strict rules for tech giants to do business in the bloc. The draft legislation, dubbed the Digital Services Act (DSA) and the Digital Markets Act (DMA), outlines specific regulations that seek to limit the power of global internet firms on the European market. Companies including Google, Apple, Amazon, Facebook and others could face hefty penalties for violating the rules. EU antitrust czar Margrethe Vestager and EU digital chief Thierry Breton presented the draft on Tuesday, after the content of the new rules was leaked to the media on Monday.
  • What's in the draft laws? The dual legislation sets out a list of do's, don'ts and penalties for internet giants: Companies with over 45 million EU users would be designated as digital "gatekeepers" — making them subject to stricter regulations. Firms could be fined up to 10% of their annual turnover for violating competition rules. They could also be required to sell one of their businesses or parts of it (including rights or brands). Platforms that refuse to comply and "endanger people's life and safety" could have their service temporarily suspended "as a last resort." Companies would need to inform the EU ahead of any planned mergers or acquisitions. Certain kinds of data must be shared with regulators and rivals. Companies favoring their own services could be outlawed. Platforms would be more responsible for illegal, disturbing or misleading content.
  • Following the announcement on Tuesday, US internet giant Google criticized the draft legislation, saying it appeared to target specific firms.  "We will carefully study the proposals made by the European Commission over the next few days. However, we are concerned that they seem to specifically target a handful of companies," said Karan Bhatia, the vice president of government affairs and public affairs at Google. Facebook appeared to offer a more conciliatory tone, saying the legislation was "on the right track."
  • ...1 more annotation...
  • The draft still faces a long ratification process, including feedback from the EU's 27 member states and the European Parliament. Company lobbyists and trade associations will also influence the final law. The process is expected to take several months or even a year.
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • ...9 more annotations...
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
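
The fingerprinting idea described in these excerpts can be sketched in a few lines. The header format below is invented for illustration (it is not the actual Kaspersky User-Agent layout); the sketch only shows how a passive observer could pull a stable machine identifier out of such a header:

```python
import re

def machine_id_from_user_agent(user_agent):
    """Extract a stable, serial-like token from a User-Agent string.

    The "ExampleAV" product name and the id= token format are invented
    for illustration; they are not the real header format described in
    the leaked documents.
    """
    match = re.search(r"ExampleAV/[\d.]+ \(id=([A-F0-9]{16})\)", user_agent)
    return match.group(1) if match else None

# A passive observer who sees this header on the wire can use the token
# to recognize the same machine across requests.
print(machine_id_from_user_agent(
    "Mozilla/5.0 ExampleAV/11.0 (id=3FA9C2D17B44E0AA)"))
```
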
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article:
    Kaspersky User-Agent Strings — NSA
    Project CAMBERDADA — NSA
    NDIST — GCHQ's Developing Cyber Defence Mission
    GCHQ Application for Renewal of Warrant GPW/1160
    Software Reverse Engineering — GCHQ
    Reverse Engineering — GCHQ Wiki
    Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Paul Merrell

W3C releases Working Draft for Widgets 1.0: APIs and Events - 0 views

  • This specification defines a set of APIs and events for the Widgets 1.0 Family of Specifications that enable baseline functionality for widgets. The APIs and events defined by this specification provide, amongst other things, the means to: access the metadata declared in a widget's configuration document, receive events related to changes in the view state of a widget, determine the locale under which a widget is currently running, be notified of events relating to the widget being updated, invoke a widget to open a URL in the system's default browser, request the user's attention in a device-independent manner, and check if any additional APIs requested via the configuration document's feature element have successfully loaded.
  • This specification defines a set of APIs and events for widgets that enable baseline functionality for widgets. Widgets are full-fledged client-side applications that are authored using Web standards. They are typically downloaded and installed on a client machine or device, where they typically run as stand-alone applications outside of a Web browser. Examples range from simple clocks, stock tickers, newscasters, games and weather forecasters, to complex applications that pull data from multiple sources to be "mashed-up" and presented to a user in some interesting and useful way.
  • This specification is part of the Widgets 1.0 family of specifications, which together standardize widgets as a whole. The Widgets 1.0: Packaging and Configuration [Widgets-Packaging] specification standardizes a Zip-based packaging format, an XML-based configuration document format and a series of steps that user agents follow when processing and verifying various aspects of widgets. The Widgets 1.0: Digital Signature [Widgets-DigSig] specification defines a means for widgets to be digitally signed using a custom profile of the XML-Signature Syntax and Processing Specification. The Widgets 1.0: Automatic Updates [Widgets-Updates] specification defines a version control model that allows widgets to be kept up-to-date over [HTTP].
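
Since the packaging format mentioned above is a Zip archive with an XML configuration document (config.xml, in the W3C widgets namespace), a package can be inspected with ordinary tools. A minimal Python sketch; the package name and its contents are hypothetical:

```python
import zipfile
import xml.etree.ElementTree as ET

NS = {"w": "http://www.w3.org/ns/widgets"}  # W3C widgets namespace

def widget_metadata(path):
    """Read basic metadata from a widget package: a Zip archive whose
    root contains a config.xml configuration document."""
    with zipfile.ZipFile(path) as pkg:
        root = ET.fromstring(pkg.read("config.xml"))
    content = root.find("w:content", NS)
    return {
        "id": root.get("id", ""),
        "version": root.get("version", ""),
        "name": root.findtext("w:name", default="", namespaces=NS),
        "start_file": content.get("src") if content is not None else "index.html",
    }

# Hypothetical package:
# print(widget_metadata("clock.wgt"))
```
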
Paul Merrell

Internet Archive: Scanning Services - 1 views

  • Digitizing Print Collections with the Internet Archive Open and free online access, permanent storage, unlimited downloads and lifetime file management. We can help digitize your collections in 4 simple steps:
  • In addition to permanent hosting on archive.org, your books will be integrated with Open Library, openlibrary.org, a page on the web for every book.
  • Non-destructive color scanning using our Scribe system at one of our scanning centers across the globe. Complete MARC records, Dublin Core & XML, just 10c USD per image and a small set-up charge per item.
  • ...4 more annotations...
  • Create and upload high-quality JPEG 2000 (JP2) images; persistent identifiers, lifetime hosting of files, lifetime management of file system and file access.
  • Create high-quality PDF/A files; run OCR across texts to allow "search inside" all books. Add to the Internet Archive search engine; display via our open source Book Reader.
  • 2,000,000 books online; 600 million pages scanned; 1,500 books scanned each day; 15 million downloads each month; 33 scanning centers in 7 countries; 3.5 petabytes of storage; 8 Gb per second bandwidth.
  • Library of Congress, Harvard University, The New York Public Library, Smithsonian Institution, The Getty Research Institute, University of California, University of Toronto, Biodiversity Heritage Library, Boston Library Consortium, C.A.R.L.I., Johns Hopkins University, Allen County Public Library, Lyrasis, Massachusetts Institute of Technology, State Library of Massachusetts . . . and over 1,000 other Open Content Alliance partners.
  •  
    I've been looking for a permanent online home for a couple of historical works I co-authored. My guiding criterion has been the best chance of the works' long-term survival in a publicly accessible form after my death. I think I may have just found my solution.
Gary Edwards

Everything You Need to Know About the Bitcoin Protocol - 0 views

  • In this research paper we hope to explain that the bitcoin currency itself is ‘just’ the next phase in the evolution of money – from dumb to smart money. It’s the underlying platform, the Bitcoin protocol aka Bitcoin 2.0, that holds the real transformative power. That is where the revolution starts. According to our research there are several reasons why this new technology is going to disrupt our economy and society as we have never experienced before:
  • From dumb to smart money
  • The Bitcoin protocol is the underlying platform that holds the real transformative power and is where the revolution starts. According to our research there are several reasons why this new technology is going to disrupt our economy and society as we have never experienced before:
  • ...2 more annotations...
  • Similar to the TCP/IP, HTTP and SMTP protocols in their infancy, the Bitcoin protocol is currently at an early evolutionary stage. Contrary to the early days of the Internet, when only a few people had a computer, nowadays everybody has a supercomputer in their pocket. It’s Moore’s Law all over again. Bitcoin is going to disrupt the economy and society with breathtaking speed. For the first time in history, technology makes it possible to transfer property rights (such as shares, certificates, digital money, etc.) quickly, transparently and very securely. Moreover, these transactions can take place without the involvement of a trusted intermediary such as a government, notary, or bank. Companies and governments are no longer needed as the “middle man” in all kinds of financial agreements. Not only does the Internet of Things give machines a digital identity, the bitcoin APIs (machine-machine interfaces) give them an economic identity as well. Next to people and corporations, machines will become a new type of agent in the economy.
  • The Bitcoin protocol flips automation upside down. From now on, automation within companies can start top down, making white-collar employees obsolete. Corporate missions can be encoded on top of the protocol. Machines can manage a corporation all by themselves. Bitcoin introduces the world to the new nature of the firm: the Distributed Autonomous Corporation (DAC). This new type of corporation also adds a new perspective to the discussion on technological unemployment. The DAC might even turn technological unemployment into structural unemployment. Bitcoin is key to the success of the Collaborative Economy. Bitcoin enables a frictionless and transparent way of sharing ideas, media, products, services and technology between people without the interference of corporations and governments.
  •  
    A series of eleven pages discussing Bitcoin and the extraordinary impact it will have on the world economy. Excellent article and a worthy follow-up to the previous Marc Andreessen discussion of Bitcoin.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
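
As an illustration of that class-attribute extensibility, here is a short Python sketch (not from the article) that pulls class-labelled chunks out of an XHTML fragment; the epigraph and summary class names are invented for the example:

```python
import xml.etree.ElementTree as ET

XHTML_FRAGMENT = """<div>
  <p class="epigraph">Everything flows.</p>
  <p>An ordinary paragraph.</p>
  <p class="summary">A one-paragraph chapter summary.</p>
</div>"""

root = ET.fromstring(XHTML_FRAGMENT)
# The class attribute carries the extra semantics that bare XHTML lacks.
for cls in ("epigraph", "summary"):
    for p in root.findall(f".//p[@class='{cls}']"):
        print(cls, "->", p.text)
```
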
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
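
Because IDML, like ePub, is just a Zip archive of XML parts, it is easy to look inside one. A minimal Python sketch; the file name is a placeholder, and the folder names reflect the typical IDML layout:

```python
import zipfile
from collections import Counter

# "layout.idml" is a placeholder name for any IDML export from InDesign CS4+.
with zipfile.ZipFile("layout.idml") as idml:
    names = idml.namelist()

# Typical top-level parts: designmap.xml plus Spreads/, MasterSpreads/,
# Stories/ and Resources/ folders, each holding plain XML files.
print(Counter(name.split("/")[0] for name in names))
for name in names:
    if name.startswith("Stories/"):
        print(name)  # one XML file per story (the actual text content)
```
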
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
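
That kind of search-and-replace cleanup can also be scripted. A minimal Python sketch, assuming the export marks list items with a hypothetical bullet-item class (the real class names depend on the InDesign paragraph styles):

```python
import re

def fix_bulleted_lists(xhtml):
    """Rewrite <p class="bullet-item">...</p> runs as a real XHTML list.

    "bullet-item" is a hypothetical class name; the actual export uses
    whatever paragraph styles the InDesign document defines.
    """
    # 1. Turn each tagged paragraph into a list item.
    xhtml = re.sub(r'<p class="bullet-item">(.*?)</p>', r"<li>\1</li>",
                   xhtml, flags=re.S)
    # 2. Wrap each run of consecutive list items in a single <ul>.
    xhtml = re.sub(r"((?:<li>.*?</li>\s*)+)", r"<ul>\n\1</ul>\n",
                   xhtml, flags=re.S)
    return xhtml

print(fix_bulleted_lists(
    '<p class="bullet-item">First point</p>\n'
    '<p class="bullet-item">Second point</p>'))
```
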
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
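
The transformation step itself is mechanical. The authors ran their stylesheet with xsltproc; the sketch below shows the same step using Python's lxml bindings instead, with placeholder file names (the project's actual 500-line stylesheet is not reproduced here):

```python
from lxml import etree

# Placeholder names: the real project's stylesheet maps XHTML elements
# to the ICML structures that InDesign/InCopy expect.
transform = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))

chapter = etree.parse("chapter-01.xhtml")
icml = transform(chapter)

# The resulting .icml file can then be "placed" into an InDesign template.
with open("chapter-01.icml", "wb") as out:
    out.write(etree.tostring(icml, xml_declaration=True, encoding="UTF-8"))
```
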
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
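
The wrapping is simple enough to sketch by hand. The following Python sketch builds a minimal ePub around one exported chapter; the title, identifier and file names are placeholders, and the NCX table of contents that a tool like eCub also generates (and that strict EPUB 2 validation expects) is omitted for brevity:

```python
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Placeholder Title</dc:title>
    <dc:identifier id="bookid">example-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter-01.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)
    epub.writestr("content.opf", OPF)
    epub.write("chapter-01.xhtml")  # the clean XHTML chapter exported from the wiki
```
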
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

European Commission publishes guidance on new data protection rules - nsnbc internation... - 0 views

  • The European Commission, on January 24, published its guidance aimed at facilitating a direct and smooth application of the European Union's new data protection rules across the EU as of 25 May. The Commission also launched a new online tool dedicated to SMEs.
  • With just over 100 days left before the application of the new law, the guidance outlines what the European Commission, national data protection authorities and national administrations, according to the Commission, should still do to bring the preparation to a successful completion. The Commission notes that while the new regulation provides for a single set of rules directly applicable in all Member States, it will still require significant adjustments in certain aspects, like amending existing laws by EU governments or setting up the European Data Protection Board by data protection authorities. The Commission states that the guidance recalls the main innovations, opportunities opened up by the new rules, takes stock of the preparatory work already undertaken and outlines the work still ahead of the European Commission, national data protection authorities and national administrations. Andrus Ansip, European Commission Vice-President for the Digital Single Market, said: “Our digital future can only be built on trust. Everyone’s privacy has to be protected. Strengthened EU data protection rules will become a reality on 25 May. It is a major step forward and we are committed to making it a success for everyone.” Věra Jourová, Commissioner for Justice, Consumers and Gender Equality, added: “In today’s world, the way we handle data will determine to a large extent our economic future and personal safety. We need modern rules to respond to new risks, so we call on EU governments, authorities and businesses to use the remaining time efficiently and fulfil their roles in the preparations for the big day.”
  • The guidance recalls the main elements of the new data protection rules: One set of rules across the continent, guaranteeing legal certainty for businesses and the same data protection level across the EU for citizens. Same rules apply to all companies offering services in the EU, even if these companies are based outside the EU. Stronger and new rights for citizens: the right to information, access and the right to be forgotten are strengthened. A new right to data portability allows citizens to move their data from one company to the other. This will give companies new business opportunities. Stronger protection against data breaches: a company experiencing a data breach, which puts individuals at risk, has to notify the data protection authority within 72 hours. Rules with teeth and deterrent fines: all data protection authorities will have the power to impose fines of up to EUR 20 million or, in the case of a company, 4% of the worldwide annual turnover.
Paul Merrell

Hey ITU Member States: No More Secrecy, Release the Treaty Proposals | Electronic Front... - 0 views

  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet. In similar fashion to the secrecy surrounding ACTA and TPP, the ITR proposals are being negotiated in secret, with high barriers preventing access to any negotiating document. While aspiring to be a venue for Internet policy-making, the ITU Member States do not appear to be very open to the idea of allowing all stakeholders (including civil society) to participate. The framework under which the ITU operates does not allow for any form of open participation. Mere access to documents and decision-makers is sold by the ITU to corporate “associate” members at prohibitively high rates. Indeed, the ITU’s business model appears to depend on revenue generation from those seeking to ‘participate’ in its policy-making processes. This revenue-based principle of policy-making is deeply troubling in and of itself, as the objective of policy making should be to reach the best possible outcome.
  • EFF, European Digital Rights, CIPPIC and CDT and a coalition of civil society organizations from around the world are demanding that the ITU Secretary General, the  WCIT-12 Council Working Group, and ITU Member States open up the WCIT-12 and the Council working group negotiations, by immediately releasing all the preparatory materials and Treaty proposals. If it affects the digital rights of citizens across the globe, the public needs to know what is going on and deserves to have a say. The Council Working Group is responsible for the preparatory work towards WCIT-12, setting the agenda for and consolidating input from participating governments and Sector Members. We demand full and meaningful participation for civil society in its own right, and without cost, at the Council Working Group meetings and the WCIT on equal footing with all other stakeholders, including participating governments. A transparent, open process that is inclusive of civil society at every stage is crucial to creating sound policy.
  • ...5 more annotations...
  • Civil society has good reason to be concerned regarding an expanded ITU policy-making role. To begin with, the institution does not appear to have high regard for the distributed multi-stakeholder decision making model that has been integral to the development of an innovative, successful and open Internet. In spite of commitments at WSIS to ensure Internet policy is based on input from all relevant stakeholders, the ITU has consistently put the interests of one stakeholder—Governments—above all others. This is discouraging, as some government interests are inconsistent with an open, innovative network. Indeed, the conditions which have made the Internet the powerful tool it is today emerged in an environment where the interests of all stakeholders are given equal footing, and existing Internet policy-making institutions at least aspire, with varying success, to emulate this equal footing. This formula is enshrined in the Tunis Agenda, which was committed to at WSIS in 2005:
  • 83. Building an inclusive development-oriented Information Society will require unremitting multi-stakeholder effort. We thus commit ourselves to remain fully engaged—nationally, regionally and internationally—to ensure sustainable implementation and follow-up of the outcomes and commitments reached during the WSIS process and its Geneva and Tunis phases of the Summit. Taking into account the multifaceted nature of building the Information Society, effective cooperation among governments, private sector, civil society and the United Nations and other international organizations, according to their different roles and responsibilities and leveraging on their expertise, is essential. 84. Governments and other stakeholders should identify those areas where further effort and resources are required, and jointly identify, and where appropriate develop, implementation strategies, mechanisms and processes for WSIS outcomes at international, regional, national and local levels, paying particular attention to people and groups that are still marginalized in their access to, and utilization of, ICTs.
  • Indeed, the ITU’s current vision of Internet policy-making is less one of distributed decision-making, and more one of ‘taking control.’ For example, in an interview conducted last June with ITU Secretary General Hamadoun Touré, Russian Prime Minister Vladimir Putin raised the suggestion that the union might take control of the Internet: “We are thankful to you for the ideas that you have proposed for discussion,” Putin told Touré in that conversation. “One of them is establishing international control over the Internet using the monitoring and supervisory capabilities of the International Telecommunication Union (ITU).” Perhaps of greater concern are views espoused by the ITU regarding the nature of the Internet. Yesterday, at the World Summit on the Information Society Forum, Mr. Alexander Ntoko, head of the Corporate Strategy Division of the ITU, explained the proposals made during the preparatory process for the WCIT, outlining a broad set of topics that can seriously impact people's rights. The categories include "security," "interoperability" and "quality of services," and the possibility that ITU recommendations and regulations will not only be binding on the world’s nations, but enforced.
  • Rights to online expression are unlikely to fare much better than privacy under an ITU model. During last year’s IGF in Kenya, a voluntary code of conduct was issued to further restrict free expression online. A group of nations (including China, the Russian Federation, Tajikistan and Uzbekistan) released a Resolution for the UN General Assembly titled, “International Code of Conduct for Information Security.”  The Code seems to be designed to preserve and protect national powers in information and communication. In it, governments pledge to curb “the dissemination of information that incites terrorism, secessionism or extremism or that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment.” This overly broad provision accords any state the right to censor or block international communications, for almost any reason.
  • Related posts: EFF Joins Coalition Denouncing Secretive WCIT Planning Process (June 2012); Congressional Witnesses Agree: Multistakeholder Processes Are Right for Internet Regulation (June 2012); Widespread Participation Is Key in Internet Governance (July 2012); Blogging ITU: Internet Users Will Be Ignored Again if Flawed ITU Proposals Gain Traction (June 2012); Global Telecom Governance Debated at European Parliament Workshop.
Paul Merrell

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen. - 2 views

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine.
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out comes as second nature to him.
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as having a liberating and safe internet, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he was in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions—and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me. Documents published with this article: Tracking Targets Through Proxies & Anonymizers; Network Shaping 101; Shaping Diagram; I Hunt Sys Admins (first published in 2014).
Paul Merrell

Use Tor or 'EXTREMIST' Tails Linux? Congrats, you're on the NSA's list * The Register - 0 views

  • Alleged leaked documents about the NSA's XKeyscore snooping software appear to show the paranoid agency is targeting Tor and Tails users, Linux Journal readers – and anyone else interested in online privacy. Apparently, this configuration file for XKeyscore is in the divulged data, which was obtained and studied by members of the Tor project and security specialists for German broadcasters NDR and WDR. In their analysis of the alleged top-secret documents, they claim the NSA is, among other things: specifically targeting Tor directory servers; reading email contents for mentions of Tor bridges; logging IP addresses used to search for privacy-focused websites and software; and possibly breaking international law in doing so. We already know from leaked Snowden documents that Western intelligence agents hate Tor for its anonymizing abilities. But what the aforementioned leaked source code, written in a rather strange custom language, shows is that not only is the NSA targeting the anonymizing network Tor specifically, it is also taking digital fingerprints of any netizens who are remotely interested in privacy.
  • These include readers of the Linux Journal site, anyone visiting the website for the Tor-powered Linux operating system Tails – described by the NSA as "a comsec mechanism advocated by extremists on extremist forums" – and anyone looking into combining Tails with the encryption tool Truecrypt. If something as innocuous as Linux Journal is on the NSA's hit list, it's a distinct possibility that El Reg is too, particularly in light of our recent exclusive report on GCHQ – which led to a Ministry of Defence advisor coming round our London office for a chat.
  • If you take even the slightest interest in online privacy or have Googled a Linux Journal article about a broken package, you are earmarked in an NSA database for further surveillance, according to these latest leaks. This is assuming the leaked file is genuine, of course. Other monitored sites, we're told, include HotSpotShield, FreeNet, Centurian, FreeProxies.org, MegaProxy, privacy.li and an anonymous email service called MixMinion. The IP address of computer users even looking at these sites is recorded and stored on the NSA's servers for further analysis, and it's up to the agency how long it keeps that data. The XKeyscore code, we're told, includes microplugins that target Tor servers in Germany, at MIT in the United States, in Sweden, in Austria, and in the Netherlands. In doing so it may not only fall foul of German law but also the US's Fourth Amendment. (A hypothetical sketch of this kind of fingerprinting rule follows these annotations.)
  • The nine Tor directory servers receive especially close monitoring from the NSA's spying software, which states the "goal is to find potential Tor clients connecting to the Tor directory servers." Tor clients linking into the directory servers are also logged. "This shows that Tor is working well enough that Tor has become a target for the intelligence services," said Sebastian Hahn, who runs one of the key Tor servers. "For me this means that I will definitely go ahead with the project."
  • While the German reporting team has published part of the XKeyscore scripting code, it doesn't say where it comes from. NSA whistleblower Edward Snowden would be a logical pick, but security experts are not so sure. "I do not believe that this came from the Snowden documents," said security guru Bruce Schneier. "I also don't believe the TAO catalog came from the Snowden documents. I think there's a second leaker out there." If so, the NSA is in for much more scrutiny than it ever expected.
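The leaked rules are written in XKeyscore's own scripting language, which has only been partially published; the sketch below is a purely hypothetical Python illustration of the behavior the reporting describes (tagging any source IP whose traffic touches Tor directory servers, bridge-request emails, or privacy-oriented sites). All names, addresses, and rule labels in it are assumptions for illustration, not material from the leaked files.

```python
# Hypothetical illustration only: a toy fingerprinting pass in the spirit of the
# rules described above. Nothing here is actual XKeyscore syntax or data.
from dataclasses import dataclass, field

# Sites of the kind the reporting says trigger a fingerprint (assumed list).
WATCHED_HOSTS = {"www.linuxjournal.com", "tails.boum.org", "www.torproject.org",
                 "privacy.li", "mixminion.net"}
# Placeholder addresses standing in for Tor directory authorities (RFC 5737 ranges).
TOR_DIRECTORY_IPS = {"192.0.2.10", "198.51.100.20"}

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    host: str = ""        # HTTP Host header or TLS SNI, if visible
    mail_body: str = ""   # plaintext body, if the flow is unencrypted email

@dataclass
class FingerprintStore:
    tagged: dict = field(default_factory=dict)  # src_ip -> set of rule labels

    def tag(self, ip: str, rule: str) -> None:
        self.tagged.setdefault(ip, set()).add(rule)

def apply_rules(flow: Flow, store: FingerprintStore) -> None:
    """Tag the source IP whenever a flow matches one of the watched patterns."""
    if flow.dst_ip in TOR_DIRECTORY_IPS:
        store.tag(flow.src_ip, "tor/directory_client")
    if flow.host in WATCHED_HOSTS:
        store.tag(flow.src_ip, "privacy_site/" + flow.host)
    if "bridges.torproject.org" in flow.mail_body.lower():
        store.tag(flow.src_ip, "tor/bridge_request")

# A single page view of a watched site is enough to earn a persistent tag.
store = FingerprintStore()
apply_rules(Flow(src_ip="203.0.113.7", dst_ip="198.51.100.99",
                 host="tails.boum.org"), store)
print(store.tagged)  # {'203.0.113.7': {'privacy_site/tails.boum.org'}}
```

The point of the sketch is only that a handful of pattern matches like these, applied to full-take traffic, is all it takes to build the kind of long-lived "interested in privacy" list the article describes.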
Paul Merrell

BBC News - GCHQ's Robert Hannigan says tech firms 'in denial' on extremism - 0 views

  • Web giants such as Twitter, Facebook and WhatsApp have become "command-and-control networks... for terrorists and criminals", GCHQ's new head has said. Islamic State extremists had "embraced" the web but some companies remained "in denial" over the problem, Robert Hannigan wrote in the Financial Times. He called for them to do more to co-operate with security services. However, civil liberties campaigners said the companies were already working with the intelligence agencies. None of the major tech firms has yet responded to Mr Hannigan's comments.
  • Mr Hannigan said IS had "embraced the web as a noisy channel in which to promote itself, intimidate people, and radicalise new recruits." The "security of its communications" added another challenge to agencies such as GCHQ, he said - adding that techniques for encrypting - or digitally scrambling - messages "which were once the preserve of the most sophisticated criminals or nation states now come as standard". GCHQ and its sister agencies, MI5 and the Secret Intelligence Service, could not tackle these challenges "at scale" without greater support from the private sector, including the largest US technology companies which dominate the web, he wrote.
  •  
    What I want to know is what we're going to do with that NSA data center at Bluffdale, Utah, after the NSA is abolished. Maybe give it to the Internet Archive?
Paul Merrell

Information Warfare: Automated Propaganda and Social Media Bots | Global Research - 0 views

  • NATO has announced that it is launching an “information war” against Russia. The UK publicly announced a battalion of keyboard warriors to spread disinformation. It’s well-documented that the West has long used false propaganda to sway public opinion. Western military and intelligence services manipulate social media to counter criticism of Western policies. Such manipulation includes flooding social media with comments supporting the government and large corporations, using armies of sock puppets, i.e. fake social media identities. In 2013, the American Congress repealed the formal ban against the deployment of propaganda against U.S. citizens living on American soil. So there’s even less to constrain propaganda than before.
  • Information warfare for propaganda purposes also includes: the Pentagon, Federal Reserve and other government entities using software to track discussion of political issues … to try to nip dissent in the bud before it goes viral; “controlling, infiltrating, manipulating and warping” online discourse; and the use of artificial intelligence programs to try to predict how people will react to propaganda.
  • Some of the propaganda is spread by software programs. We pointed out 6 years ago that people were writing scripts to censor hard-hitting information from social media. One of America’s top cyber-propagandists – former high-level military information officer Joel Harding – wrote in December: I was in a discussion today about information being used in social media as a possible weapon.  The people I was talking with have a tool which scrapes social media sites, gauges their sentiment and gives the user the opportunity to automatically generate a persuasive response. Their tool is called a “Social Networking Influence Engine”. *** The implications seem to be profound for the information environment. *** The people who own this tool are in the civilian world and don’t even remotely touch the defense sector, so getting approval from the US Department of State might not even occur to them.
  • How Can This Be Real? Gizmodo reported in 2010: Software developer Nigel Leck got tired of rehashing the same 140-character arguments against climate change deniers, so he programmed a bot that does the work for him. With citations! Leck’s bot, @AI_AGW, doesn’t just respond to arguments directed at Leck himself, it goes out and picks fights. Every five minutes it trawls Twitter for terms and phrases that commonly crop up in Tweets that refute human-caused climate change. It then searches its database of hundreds of canned responses to find a counter-argument best suited for that tweet—usually a quick statement and a link to a scientific source. As can be the case with these sorts of things, many of the deniers don’t know they’ve been targeted by a robot and engage AI_AGW in debate. The bot will continue to fire back canned responses that best fit the interlocutor’s line of debate—Leck says this goes on for days, in some cases—and the bot’s been outfitted with a number of responses on the topic of religion, where the arguments unsurprisingly often end up. Technology has come a long way in the past 5 years. So if a lone programmer could do this 5 years ago, imagine what he could do now. And the big players have a lot more resources at their disposal than a lone climate activist/software developer does. For example, a government expert told the Washington Post that the government “quite literally can watch your ideas form as you type.” So if the lone programmer is doing it, it’s not unreasonable to assume that the big boys are widely doing it. (A rough sketch of this kind of keyword-triggered reply bot follows these annotations.)
  • How Effective Are Automated Comments? Unfortunately, this is more effective than you might assume … Specifically, scientists have shown that name-calling and swearing break down people’s ability to think rationally … and intentionally sowing discord and posting junk comments to push down insightful comments are common propaganda techniques. Indeed, an automated program need not even be that sophisticated … it can copy a couple of words from the main post or a comment, and then spew back one or more radioactive labels such as “terrorist”, “commie”, “Russia-lover”, “wimp”, “fascist”, “loser”, “traitor”, “conspiratard”, etc. Given that Harding and his compadres consider anyone who questions any U.S. policies an enemy of the state – as does the Obama administration – many honest, patriotic writers and commenters may be targeted for automated propaganda comments.
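As the Gizmodo excerpt suggests, nothing about such a bot is technically demanding. The sketch below is a minimal, hypothetical illustration of a keyword-triggered reply bot; the trigger patterns, canned replies, and the two stubbed platform functions (`search_recent_posts`, `post_reply`) are all assumptions, since no real platform API is specified here.

```python
# Minimal, hypothetical sketch of a keyword-triggered auto-reply bot like @AI_AGW.
# The platform API is stubbed out; triggers and replies are invented examples.
import re
import time

TRIGGERS = [
    {"query": "climate hoax",
     "pattern": r"climate (change|science) is a (hoax|scam)",
     "reply": "Independent surface-temperature records all show the same warming trend: https://example.org/records"},
    {"query": "no scientific consensus climate",
     "pattern": r"no (scientific )?consensus",
     "reply": "Reviews of the peer-reviewed literature consistently find overwhelming agreement: https://example.org/consensus"},
]

def search_recent_posts(query: str) -> list:
    """Stub for a platform search endpoint; returns posts as {'id': ..., 'text': ...}."""
    return []  # a real bot would call the platform's search API here

def post_reply(post_id: str, text: str) -> None:
    """Stub for publishing a reply to a given post."""
    print(f"replying to {post_id}: {text}")

def run_once(seen: set) -> None:
    for trigger in TRIGGERS:
        for post in search_recent_posts(trigger["query"]):
            if post["id"] in seen:
                continue  # never reply to the same post twice
            if re.search(trigger["pattern"], post["text"], re.IGNORECASE):
                post_reply(post["id"], trigger["reply"])
                seen.add(post["id"])

if __name__ == "__main__":
    seen_posts = set()
    while True:
        run_once(seen_posts)
        time.sleep(300)  # poll every five minutes, as the article describes
```

Everything beyond this loop (sentiment scoring, picking the "best suited" counter-argument, escalating over days) is incremental engineering on top of the same pattern, which is the article's point about how cheap automated propaganda has become.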
Paul Merrell

The Million Dollar Dissident: NSO Group's iPhone Zero-Days used against a UAE Human Rig... - 0 views

  • 1. Executive Summary: Ahmed Mansoor is an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”). On August 10 and 11, 2016, Mansoor received SMS text messages on his iPhone promising “new secrets” about detainees tortured in UAE jails if he clicked on an included link. Instead of clicking, Mansoor sent the messages to Citizen Lab researchers. We recognized the links as belonging to an exploit infrastructure connected to NSO Group, an Israel-based “cyber war” company that sells Pegasus, a government-exclusive “lawful intercept” spyware product. NSO Group is reportedly owned by an American venture capital firm, Francisco Partners Management. The ensuing investigation, a collaboration between researchers from Citizen Lab and from Lookout Security, determined that the links led to a chain of zero-day exploits (“zero-days”) that would have remotely jailbroken Mansoor’s stock iPhone 6 and installed sophisticated spyware. We are calling this exploit chain Trident. Once infected, Mansoor’s phone would have become a digital spy in his pocket, capable of employing his iPhone’s camera and microphone to snoop on activity in the vicinity of the device, recording his WhatsApp and Viber calls, logging messages sent in mobile chat apps, and tracking his movements. We are not aware of any previous instance of an iPhone remote jailbreak used in the wild as part of a targeted attack campaign, making this a rare find.
  • The Trident exploit chain comprises CVE-2016-4657 (visiting a maliciously crafted website may lead to arbitrary code execution), CVE-2016-4655 (an application may be able to disclose kernel memory), and CVE-2016-4656 (an application may be able to execute arbitrary code with kernel privileges). Once we confirmed the presence of what appeared to be iOS zero-days, Citizen Lab and Lookout quickly initiated a responsible disclosure process by notifying Apple and sharing our findings. Apple responded promptly, and notified us that they would be addressing the vulnerabilities. We are releasing this report to coincide with the availability of the iOS 9.3.5 patch, which blocks the Trident exploit chain by closing the vulnerabilities that NSO Group appears to have exploited and sold to remotely compromise iPhones. Recent Citizen Lab research has shown that many state-sponsored spyware campaigns against civil society groups and human rights defenders use “just enough” technical sophistication, coupled with carefully planned deception. This case demonstrates that not all threats follow this pattern. The iPhone has a well-deserved reputation for security. As the iPhone platform is tightly controlled by Apple, technically sophisticated exploits are often required to enable the remote installation and operation of iPhone monitoring tools. These exploits are rare and expensive. Firms that specialize in acquiring zero-days often pay handsomely for iPhone exploits. One such firm, Zerodium, acquired an exploit chain similar to the Trident for one million dollars in November 2015. The high cost of iPhone zero-days, the apparent use of NSO Group’s government-exclusive Pegasus product, and prior known targeting of Mansoor by the UAE government provide indicators that point to the UAE government as the likely operator behind the targeting. Remarkably, this case marks the third commercial “lawful intercept” spyware suite employed in attempts to compromise Mansoor. In 2011, he was targeted with FinFisher’s FinSpy spyware, and in 2012 he was targeted with Hacking Team’s Remote Control System. Both Hacking Team and FinFisher have been the object of several years of revelations highlighting the misuse of spyware to compromise civil society groups, journalists, and human rights workers.
Gonzalo San Gil, PhD.

The 13 Most Insidious, Pervasive Lies of the Modern Music Industry... - Digital Music N... - 1 views

  •  
    "Wednesday, September 25, 2013 by Paul Resnikoff It was the future we all wanted so desperately to come true…"
Gonzalo San Gil, PhD.

Be warned of Digital Deception ;) - YouTube - 2 views

  •  
    [You must see this! Does this make you wonder how much footage from prominent world events is fake? How much fake news are we fed? Dictators, terrorists, riot...]
Gary Edwards

Blog | Spritz - 0 views

  • Therein lies one of the biggest problems with traditional RSVP. Each time you see text that is not centered properly on the ORP position, your eyes naturally will look for the ORP to process the word and understand its meaning. This requisite eye movement creates a “saccade”, a physical eye movement caused by your eyes taking a split second to find the proper ORP for a word. Every saccade has a penalty in both time and comprehension, especially when you start to speed up reading. Some saccades are considered by your brain to be “normal” during reading, such as when you move your eye from left to right to go from one ORP position to the next ORP position while reading a book. Other saccades are not normal to your brain during reading, such as when you move your eyes right to left to spot an ORP. This eye movement is akin to trying to read a line of text backwards. In normal reading, you normally won’t saccade right-to-left unless you encounter a word that your brain doesn’t already know and you go back for another look; those saccades will increase based on the difficulty of the text being read and the percentage of words within it that you already know. And the math doesn’t look good, either. If you determined the length of all the words in a given paragraph, you would see that, depending on the language you’re reading, there is a low (less than 15%) probability of two adjacent words being the same length and not requiring a saccade when they are shown to you one at a time. This means you move your eyes on a regular basis with traditional RSVP! In fact, you still move them with almost every word. In general, left-to-right saccades contribute to slower reading due to the increased travel time for the eyeballs, while right-to-left saccades are discombobulating for many people, especially at speed. It’s like reading a lot of text that contains words you don’t understand only you DO understand the words! The experience is frustrating to say the least.
  • In addition to saccading, another issue with RSVP is associated with “foveal vision,” the area in focus when you look at a sentence. This distance defines the number of letters on which your eyes can sharply focus as you read. Its companion is called “parafoveal vision” and refers to the area outside foveal vision that cannot be seen sharply.
  •  
    "To understand Spritz, you must understand Rapid Serial Visual Presentation (RSVP). RSVP is a common speed-reading technique used today. However, RSVP was originally developed for psychological experiments to measure human reactions to content being read. When RSVP was created, there wasn't much digital content and most people didn't have access to it anyway. The internet didn't even exist yet. With traditional RSVP, words are displayed either left-aligned or centered. Figure 1 shows an example of a center-aligned RSVP, with a dashed line on the center axis. When you read a word, your eyes naturally fixate at one point in that word, which visually triggers the brain to recognize the word and process its meaning. In Figure 1, the preferred fixation point (character) is indicated in red. In this figure, the Optimal Recognition Position (ORP) is different for each word. For example, the ORP is only in the middle of a 3-letter word. As the length of a word increases, the percentage that the ORP shifts to the left of center also increases. The longer the word, the farther to the left of center your eyes must move to locate the ORP."
Paul Merrell

M of A - Assad Says The "Boy In The Ambulance" Is Fake - This Proves It - 0 views

  • Re: Major net hack - it's not necessarily off topic. .gov is herding web sites into its own little DNS animal farms so it can properly protect the public from that dangerous 'information' stuff in time of emergency. CloudFlare is the biggest abattoir... er, animal farm. CloudFlare is kind of like a protection racket. If you pay their outrageous fees, you will be 'protected' from DDoS attacks. Since CloudFlare is the preferred covert .gov tool of censorship and content control (when things go south), they are trying to drive as many sites as possible into their digital panopticons. Who the hell is Cloudflare? ISUCKER: BIG BROTHER INTERNET CULTURE On top of that, CloudFlare’s CEO Matthew Prince made a weird, glib admission that he decided to start the company only after the Department of Homeland Security gave him a call in 2007 and suggested he take the technology behind Project Honey Pot one step further… And that makes CloudFlare a whole different story: People who sign up for the service are allowing CloudFlare to monitor, observe and scrutinize all of their site’s traffic, which makes it much easier for intel or law enforcement agencies to collect info on websites without having to hack or request the logs from each hosting company separately. But there’s more. Because CloudFlare doesn’t just passively monitor internet traffic but works like a dynamic firewall to selectively block traffic from sources it deems to be “hostile,” website operators are giving it a whole lotta power over who gets to see their content. The whole point of CloudFlare is to restrict access to websites from specific locations/IP addresses on the fly, without notifying or bothering the website owner with the details. It all boils down to a question of trust, as in: do you trust a shady company with known intel/law enforcement connections to make that decision?
  • And here is an added bonus for the paranoid: Because CloudFlare partially caches websites and delivers them to web surfers via its own servers, the company also has the power to serve up redacted versions of the content to specific users. CloudFlare is perfect: it can implement censorship on the fly, without anyone getting wise to it! Right now CloudFlare says it monitors nearly 1/5 of all Internet visits. [<-- this] An astounding claim for a company most people haven’t even heard of. And techie bloggers seem very excited about getting as much Internet traffic routed through them as possible! See? Plausible deniability. A couple of degrees of separation. Yet when the Borg Queen wants to start WWIII next year, she can order the DHS Stasi to order outfits like CloudFlare to do the proper 'shaping' of internet traffic to filter out unwanted information. How far is any exposé of propaganda like Dusty Boy going to get if nobody can reach sites like MoA? You'll be able to get to all kinds of tweets and NGO sites crying about Dusty Boy 2.0, but you won't see a tweet or a web site calling them out on their lies. Will you even know they interviewed Assad? Will you know the activist 'photographer' is a paid NGO shill or that he's pals with al Zenki? Nope, not if .gov can help it.
Paul Merrell

NZ Prime Minister John Key Retracts Vow to Resign if Mass Surveillance Is Shown - 0 views

  • In August 2013, as evidence emerged of the active participation by New Zealand in the “Five Eyes” mass surveillance program exposed by Edward Snowden, the country’s conservative Prime Minister, John Key, vehemently denied that his government engages in such spying. He went beyond mere denials, expressly vowing to resign if it were ever proven that his government engages in mass surveillance of New Zealanders. He issued that denial, and the accompanying resignation vow, in order to reassure the country over fears provoked by a new bill he advocated to increase the surveillance powers of that country’s spying agency, Government Communications Security Bureau (GCSB) — a bill that passed by one vote thanks to the Prime Minister’s guarantees that the new law would not permit mass surveillance.
  • Since then, a mountain of evidence has been presented that indisputably proves that New Zealand does exactly that which Prime Minister Key vehemently denied — exactly that which he said he would resign if it were proven was done. Last September, we reported on a secret program of mass surveillance at least partially implemented by the Key government that was designed to exploit the very law that Key was publicly insisting did not permit mass surveillance. At the time, Snowden, citing that report as well as his own personal knowledge of GCSB’s participation in the mass surveillance tool XKEYSCORE, wrote in an article for The Intercept: Let me be clear: any statement that mass surveillance is not performed in New Zealand, or that the internet communications are not comprehensively intercepted and monitored, or that this is not intentionally and actively abetted by the GCSB, is categorically false. . . . The prime minister’s claim to the public, that “there is no and there never has been any mass surveillance” is false. The GCSB, whose operations he is responsible for, is directly involved in the untargeted, bulk interception and algorithmic analysis of private communications sent via internet, satellite, radio, and phone networks.
  • A series of new reports last week by New Zealand journalist Nicky Hager, working with my Intercept colleague Ryan Gallagher, has added substantial proof demonstrating GCSB’s widespread use of mass surveillance. An article last week in The New Zealand Herald demonstrated that “New Zealand’s electronic surveillance agency, the GCSB, has dramatically expanded its spying operations during the years of John Key’s National Government and is automatically funnelling vast amounts of intelligence to the US National Security Agency.” Specifically, its “intelligence base at Waihopai has moved to ‘full-take collection,’ indiscriminately intercepting Asia-Pacific communications and providing them en masse to the NSA through the controversial NSA intelligence system XKeyscore, which is used to monitor emails and internet browsing habits.” Moreover, the documents “reveal that most of the targets are not security threats to New Zealand, as has been suggested by the Government,” but “instead, the GCSB directs its spying against a surprising array of New Zealand’s friends, trading partners and close Pacific neighbours.” A second report late last week published jointly by Hager and The Intercept detailed the role played by GCSB’s Waihopai base in aiding NSA’s mass surveillance activities in the Pacific (as Hager was working with The Intercept on these stories, his house was raided by New Zealand police for 10 hours, ostensibly to find Hager’s source for a story he published that was politically damaging to Key).
  • That the New Zealand government engages in precisely the mass surveillance activities Key vehemently denied is now barely in dispute. Indeed, a former director of GCSB under Key, Sir Bruce Ferguson, while denying any abuse of New Zealander’s communications, now admits that the agency engages in mass surveillance.
  • Meanwhile, Russel Norman, the head of the country’s Green Party, said in response to these stories that New Zealand is “committing crimes” against its neighbors in the Pacific by subjecting them to mass surveillance, and insists that the Key government broke the law because that dragnet necessarily includes the communications of New Zealand citizens when they travel in the region.
  • So now that it’s proven that New Zealand does exactly that which Prime Minister Key vowed would cause him to resign if it were proven, is he preparing his resignation speech? No: that’s something a political official with a minimal amount of integrity would do. Instead — even as he now refuses to say what he has repeatedly said before: that GCSB does not engage in mass surveillance — he’s simply retracting his pledge as though it were a minor irritant, something to be casually tossed aside:
  • When asked late last week whether New Zealanders have a right to know what their government is doing in the realm of digital surveillance, the Prime Minister said: “as a general rule, no.” And he expressly refuses to say whether New Zealand is doing that which he swore repeatedly it was not doing, as this excellent interview from Radio New Zealand sets forth: Interviewer: “Nicky Hager’s revelations late last week . . . have stoked fears that New Zealanders’ communications are being indiscriminately caught in that net. . . . The Prime Minister, John Key, has in the past promised to resign if it were found to be mass surveillance of New Zealanders . . . Earlier, Mr. Key was unable to give me an assurance that mass collection of communications from New Zealanders in the Pacific was not taking place.” PM Key: “No, I can’t. I read the transcript [of former GCSB Director Bruce Ferguson’s interview] – I didn’t hear the interview – but I read the transcript, and you know, look, there’s a variety of interpretations – I’m not going to critique–”
  • Interviewer: “OK, I’m not asking for a critique. Let’s listen to what Bruce Ferguson did tell us on Friday:” Ferguson: “The whole method of surveillance these days, is sort of a mass collection situation – individualized: that is mission impossible.” Interviewer: “And he repeated that several times, using the analogy of a net which scoops up all the information. . . . I’m not asking for a critique with respect to him. Can you confirm whether he is right or wrong?” Key: “Uh, well I’m not going to go and critique the guy. And I’m not going to give a view of whether he’s right or wrong” . . . . Interviewer: “So is there mass collection of personal data of New Zealand citizens in the Pacific or not?” Key: “I’m just not going to comment on where we have particular targets, except to say that where we go and collect particular information, there is always a good reason for that.”
  • From “I will resign if it’s shown we engage in mass surveillance of New Zealanders” to “I won’t say if we’re doing it” and “I won’t quit either way despite my prior pledges.” Listen to the whole interview: both to see the type of adversarial questioning to which U.S. political leaders are so rarely subjected, but also to see just how obfuscating Key’s answers are. The history of reporting from the Snowden archive has been one of serial dishonesty from numerous governments: such as the way European officials at first pretended to be outraged victims of NSA only for it to be revealed that, in many ways, they are active collaborators in the very system they were denouncing. But, outside of the U.S. and U.K. itself, the Key government has easily been the most dishonest over the last 20 months: one of the most shocking stories I’ve seen during this time was how the Prime Minister simultaneously plotted in secret to exploit the 2013 proposed law to implement mass surveillance at exactly the same time that he persuaded the public to support it by explicitly insisting that it would not allow mass surveillance. But overtly reneging on a public pledge to resign is a new level of political scandal. Key was just re-elected for his third term, and like any political official who stays in power too long, he has the despot’s mentality that he’s beyond all ethical norms and constraints. But by the admission of his own former GCSB chief, he has now been caught red-handed doing exactly that which he swore to the public would cause him to resign if it were proven. If nothing else, the New Zealand media ought to treat that public deception from its highest political official with the level of seriousness it deserves.
  •  
    It seems the U.S. is not the only nation that has liars for head of state. 