
Future of the Web: Group items tagged "justification"


Paul Merrell

UN Report Finds Mass Surveillance Violates International Treaties and Privacy Rights - ... - 0 views

  • The United Nations’ top official for counter-terrorism and human rights (known as the “Special Rapporteur”) issued a formal report to the U.N. General Assembly today that condemns mass electronic surveillance as a clear violation of core privacy rights guaranteed by multiple treaties and conventions. “The hard truth is that the use of mass surveillance technology effectively does away with the right to privacy of communications on the Internet altogether,” the report concluded. Central to the Rapporteur’s findings is the distinction between “targeted surveillance” — which “depend[s] upon the existence of prior suspicion of the targeted individual or organization” — and “mass surveillance,” whereby “states with high levels of Internet penetration can [] gain access to the telephone and e-mail content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites.” In a system of “mass surveillance,” the report explained, “all of this is possible without any prior suspicion related to a specific individual or organization. The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned.”
  • Mass surveillance thus “amounts to a systematic interference with the right to respect for the privacy of communications,” it declared. As a result, “it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately.” In concluding that mass surveillance impinges core privacy rights, the report was primarily focused on the International Covenant on Civil and Political Rights, a treaty enacted by the General Assembly in 1966, to which all of the members of the “Five Eyes” alliance are signatories. The U.S. ratified the treaty in 1992, albeit with various reservations that allowed for the continuation of the death penalty and which rendered its domestic law supreme. With the exception of the U.S.’s Persian Gulf allies (Saudi Arabia, UAE and Qatar), virtually every major country has signed the treaty. Article 17 of the Covenant guarantees the right of privacy, the defining protection of which, the report explained, is “that individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”
  • The report’s key conclusion is that this core right is impinged by mass surveillance programs: “Bulk access technology is indiscriminately corrosive of online privacy and impinges on the very essence of the right guaranteed by article 17. In the absence of a formal derogation from States’ obligations under the Covenant, these programs pose a direct and ongoing challenge to an established norm of international law.” The report recognized that protecting citizens from terrorism attacks is a vital duty of every state, and that the right of privacy is not absolute, as it can be compromised when doing so is “necessary” to serve “compelling” purposes. It noted: “There may be a compelling counter-terrorism justification for the radical re-evaluation of Internet privacy rights that these practices necessitate. ” But the report was adamant that no such justifications have ever been demonstrated by any member state using mass surveillance: “The States engaging in mass surveillance have so far failed to provide a detailed and evidence-based public justification for its necessity, and almost no States have enacted explicit domestic legislation to authorize its use.”
  • ...5 more annotations...
  • Instead, explained the Rapporteur, states have relied on vague claims whose validity cannot be assessed because of the secrecy behind which these programs are hidden: “The arguments in favor of a complete abrogation of the right to privacy on the Internet have not been made publicly by the States concerned or subjected to informed scrutiny and debate.” About the ongoing secrecy surrounding the programs, the report explained that “states deploying this technology retain a monopoly of information about its impact,” which is “a form of conceptual censorship … that precludes informed debate.” A June report from the High Commissioner for Human Rights similarly noted “the disturbing lack of governmental transparency associated with surveillance policies, laws and practices, which hinders any effort to assess their coherence with international human rights law and to ensure accountability.” The rejection of the “terrorism” justification for mass surveillance as devoid of evidence echoes virtually every other formal investigation into these programs. A federal judge last December found that the U.S. Government was unable to “cite a single case in which analysis of the NSA’s bulk metadata collection actually stopped an imminent terrorist attack.” Later that month, President Obama’s own Review Group on Intelligence and Communications Technologies concluded that mass surveillance “was not essential to preventing attacks” and information used to detect plots “could readily have been obtained in a timely manner using conventional [court] orders.”
  • That principle — that the right of internet privacy belongs to all individuals, not just Americans — was invoked by NSA whistleblower Edward Snowden when he explained in a June, 2013 interview at The Guardian why he disclosed documents showing global surveillance rather than just the surveillance of Americans: “More fundamentally, the ‘US Persons’ protection in general is a distraction from the power and danger of this system. Suspicionless surveillance does not become okay simply because it’s only victimizing 95% of the world instead of 100%.” The U.N. Rapporteur was clear that these systematic privacy violations are the result of a union between governments and tech corporations: “States increasingly rely on the private sector to facilitate digital surveillance. This is not confined to the enactment of mandatory data retention legislation. Corporates [sic] have also been directly complicit in operationalizing bulk access technology through the design of communications infrastructure that facilitates mass surveillance. ”
  • The report was most scathing in its rejection of a key argument often made by American defenders of the NSA: that mass surveillance is justified because Americans are given special protections (the requirement of a FISA court order for targeted surveillance) which non-Americans (95% of the world) do not enjoy. Not only does this scheme fail to render mass surveillance legal, but it itself constitutes a separate violation of international treaties (emphasis added): The Special Rapporteur concurs with the High Commissioner for Human Rights that where States penetrate infrastructure located outside their territorial jurisdiction, they remain bound by their obligations under the Covenant. Moreover, article 26 of the Covenant prohibits discrimination on grounds of, inter alia, nationality and citizenship. The Special Rapporteur thus considers that States are legally obliged to afford the same privacy protection for nationals and non-nationals and for those within and outside their jurisdiction. Asymmetrical privacy protection regimes are a clear violation of the requirements of the Covenant.
  • Three Democratic Senators on the Senate Intelligence Committee wrote in The New York Times that “the usefulness of the bulk collection program has been greatly exaggerated” and “we have yet to see any proof that it provides real, unique value in protecting national security.” A study by the centrist New America Foundation found that mass metadata collection “has had no discernible impact on preventing acts of terrorism” and, where plots were disrupted, “traditional law enforcement and investigative methods provided the tip or evidence to initiate the case.” It labeled the NSA’s claims to the contrary as “overblown and even misleading.” While worthless in counter-terrorism policies, the UN report warned that allowing mass surveillance to persist with no transparency creates “an ever present danger of ‘purpose creep,’ by which measures justified on counter-terrorism grounds are made available for use by public authorities for much less weighty public interest purposes.” Citing the UK as one example, the report warned that, already, “a wide range of public bodies have access to communications data, for a wide variety of purposes, often without judicial authorization or meaningful independent oversight.”
  • The latest finding adds to the growing number of international formal rulings that the mass surveillance programs of the U.S. and its partners are illegal. In January, the European parliament’s civil liberties committee condemned such programs in “the strongest possible terms.” In April, the European Court of Justice ruled that European legislation on data retention contravened EU privacy rights. A top secret memo from the GCHQ, published last year by The Guardian, explicitly stated that one key reason for concealing these programs was fear of a “damaging public debate” and specifically “legal challenges against the current regime.” The report ended with a call for far greater transparency along with new protections for privacy in the digital age. Continuation of the status quo, it warned, imposes “a risk that systematic interference with the security of digital communications will continue to proliferate without any serious consideration being given to the implications of the wholesale abandonment of the right to online privacy.” The urgency of these reforms is underscored, explained the Rapporteur, by a conclusion of the United States Privacy and Civil Liberties Oversight Board that “permitting the government to routinely collect the calling records of the entire nation fundamentally shifts the balance of power between the state and its citizens.”
Gonzalo San Gil, PhD.

Does Commissioner Oettinger Want to Discard Net Neutrality? | La Quadrature du Net - 0 views

  •  
    "Paris, 18 November 2014 - Günther Oettinger, Digital Commissioner, made his first post on his blog, in which he clearly presents guaranteeing internet access in rural zones as a justification for giving in to the demands of the Telcos to consolidate or increase their unwarranted earnings. Although the author has tried to avoid mentioning Net Neutrality, this blog post reveals his intended strategy regarding this principle."
Gonzalo San Gil, PhD.

Most Europeans Download and Stream Pirated Movies | TorrentFreak - 0 views

  •  
    " Ernesto on February 14, 2014 C: 19 Breaking Nearly 70% of all Europeans download or stream films for free, a new study from the European Commission reveals. The high costs of legal alternatives such as movie tickets and DVDs are the main justification, with release lags and limited availability also among the top reasons."
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ... - 0 views

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • ...2 more annotations...
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Gonzalo San Gil, PhD.

Comcast Dramatically Expands Unnecessary Broadband Caps -- For 'Fairness' | Techdirt - 1 views

  •  
    "from the pay-more-for-the-same-service! dept For years, we've noted how there's absolutely zero financial or technical justification for usage caps on fixed-line networks. They don't really help manage congestion, and as any incumbent ISP earnings report indicates, flat-rate broadband has proven incredibly profitable. But"
Gonzalo San Gil, PhD.

Music piracy 'funding terrorism and crime' - FT.com 2004 - 0 views

  •  
    "Gangs linked to inter-national terrorism and organised crime are relying increasingly on music piracy to fund their operations, according to music industry figures published yesterday. Leading music groups saw the value of pirated sales rise by 4 per cent to $4.5bn (€3.7bn, £2.4bn) last year, and claimed the proceeds were being used for money laundering, drugs trafficking and terrorism."
Paul Merrell

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen. - 2 views

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine.
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out is his second nature.
  • ...7 more annotations...
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as having a liberating and safe internet, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he was in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions—and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me. Documents published with this article: Tracking Targets Through Proxies & Anonymizers; Network Shaping 101; Shaping Diagram; I Hunt Sys Admins (first published in 2014).
Paul Merrell

Study: Surveillance will cost US tech sector more than $35B by 2016 | TheHill - 0 views

  • A new study says that the U.S. tech industry is likely to lose more than $35 billion from foreign customers by 2016 because of concerns over government surveillance. “In short, foreign customers are shunning U.S. companies,” the authors of a new study from the Information Technology and Innovation Foundation write. “The U.S. government’s failure to reform many of the NSA’s surveillance programs has damaged the competitiveness of the U.S. tech sector and cost it a portion of the global market share,” they said. The think tank’s report found that the cost to the tech sector associated with ongoing concerns over surveillance programs run out of the U.S. was likely to “far exceed” $35 billion by 2016, an earlier estimate set by the group.
  • The group said that lawmakers must enact additional reforms to surveillance policy if they wish to help the tech sector regain the trust of foreign customers. That includes opposing “backdoors,” which allow law enforcement to access otherwise encrypted data, and signing off on trade agreements, including the controversial Trans-Pacific Partnership, that “ban digital protectionism.” The study’s authors found that the revelations about broad U.S. surveillance programs acted as a justification for foreign policymakers to enact protectionist policies aimed at aiding their own domestic technology sectors. Foreign companies have also used the information about U.S. surveillance programs to their advantage. “Some European companies have begun to highlight where their digital services are hosted as an alternative to U.S. companies,” the authors write.
  • American companies, they found, have lost contracts to foreign competitors over fears about mass surveillance. Earlier this month, President Obama signed the USA Freedom Act, a bill that reformed the three Patriot Act provisions that authorized the bulk, warrantless collection of Americans’ phone records. The bill was widely supported by technology companies, including giants like Apple and Google.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
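    To make this concrete, here is a minimal sketch of that class-based approach; the structure is ordinary XHTML, and the class names ("epigraph", "attribution", "pull-quote") are invented for illustration rather than taken from any published vocabulary:

      <div class="chapter">
        <p class="epigraph">Every reader reads a different book.
          <span class="attribution">(attributed epigraph)</span></p>
        <p>An ordinary paragraph remains a plain XHTML paragraph.</p>
        <p class="pull-quote">A paragraph flagged for later reuse as a pull quote
          or as marketing copy.</p>
      </div>

    Any downstream stylesheet or transformation script can key on those class values, which is essentially the microformats pattern applied to a publisher's own needs.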
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
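    As a rough sketch of that anatomy, unzipping a typical ePub reveals a layout like the following (the names follow common ePub 2 conventions; the chapter and stylesheet file names are placeholders):

      mimetype                   the literal string "application/epub+zip"
      META-INF/container.xml     points to the package (.opf) file
      OEBPS/content.opf          metadata, manifest, and reading order (spine)
      OEBPS/toc.ncx              navigation / table of contents
      OEBPS/chapter01.xhtml      the book content itself, as XHTML
      OEBPS/styles.css           presentation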
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
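    Unpacking an IDML file shows a structure along these general lines; the listing is indicative rather than exhaustive, and the story file name is a placeholder:

      mimetype                   identifies the package as an IDML document
      designmap.xml              the top-level map tying the parts together
      MasterSpreads/             master page definitions
      Spreads/                   the layout spreads
      Stories/Story_u123.xml     the text content, one file per story
      Resources/Styles.xml       paragraph and character style definitions
      Resources/Fonts.xml, Graphic.xml, Preferences.xml
      META-INF/container.xml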
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
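    A small before-and-after sketch of that cleanup (the class name "bullet" is illustrative; the actual name in the export depends on the document's style definitions):

      <!-- As exported by "Digital Editions": presentation-oriented -->
      <p class="bullet">First point</p>
      <p class="bullet">Second point</p>

      <!-- After cleanup: structural XHTML -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>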
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
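    The script itself is not reproduced in this excerpt, but a heavily simplified sketch of the kind of template it contains might look like the following. The "ParagraphStyle/Body" name is an assumption that must match a style defined in the receiving InDesign template, and a real script needs many more templates (headings, lists, inline emphasis) plus the ICML document wrapper that InCopy expects:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">

        <!-- Map each XHTML paragraph onto an ICML paragraph-style range -->
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>

    Run through any XSLT 1.0 processor (for example, xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml, where both file names are placeholders), a transformation of this kind produces content that can then be "placed" against the styles in the template.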
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

CISPA is back! - 0 views

  • OPERATION: Fax Big Brother. Congress is rushing toward a vote on CISA, the worst spying bill yet. CISA would grant sweeping legal immunity to giant companies like Facebook and Google, allowing them to do almost anything they want with your data. In exchange, they'll share even more of your personal information with the government, all in the name of "cybersecurity." CISA won't stop hackers — Congress is stuck in 1984 and doesn't understand modern technology. So this week we're sending them thousands of faxes — technology that is hopefully old enough for them to understand. Stop CISA. Send a fax now!
  • (Any tweet w/ #faxbigbrother will get faxed too!)
  • CISA: the dirty deal between government and corporate giants. It's the dirty deal that lets much of government, from the NSA to local police, get your private data from your favorite websites and lets them use it without due process. The government is proposing a massive bribe—they will give corporations immunity for breaking virtually any law if they do so while providing the NSA, DHS, DEA, and local police surveillance access to everyone's data in exchange for getting away with crimes, like fraud, money laundering, or illegal wiretapping. Specifically, it incentivizes companies to automatically and simultaneously transfer your data to the DHS, NSA, FBI, and local police with all of your personally-identifying information by giving companies legal immunity (notwithstanding any law), and on top of that, you can't use the Freedom of Information Act to find out what has been shared.
  • ...1 more annotation...
  • The NSA and members of Congress want to pass a "cybersecurity" bill so badly, they’re using the recent hack of the Office of Personnel Management as justification for bringing CISA back up and rushing it through. In reality, the OPM hack just shows that the government has not been a good steward of sensitive data and they need to institute real security measures to fix their problems. The truth is that CISA could not have prevented the OPM hack, and no Senator could explain how it could have. Congress and the NSA are using irrational hysteria to turn the Internet into a place where the government has overly broad, unchecked powers. Why Faxes? Since 2012, online and civil liberties groups and 30,000+ sites have driven more than 2.6 million emails and hundreds of thousands of calls, tweets and more to Congress opposing overly broad cybersecurity legislation. Congress has tried to pass CISA in one form or another 4 times, and they were beat back every time by people like you. It's clear Congress is completely out of touch with modern technology, so this week, as Congress rushes toward a vote on CISA, we are going to send them thousands of faxes, a technology from the 1980s that is hopefully antiquated enough for them to understand. Sending a fax is super easy — you can use this page to send a fax. Any tweet with the hashtag #faxbigbrother will get turned into a fax to Congress too, so what are you waiting for? Click here to send a fax now!
Paul Merrell

Bankrolled by broadband donors, lawmakers lobby FCC on net neutrality | Ars Technica - 1 views

  • The 28 House members who lobbied the Federal Communications Commission to drop net neutrality this week have received more than twice the amount in campaign contributions from the broadband sector than the average for all House members. These lawmakers, including the top House leadership, warned the FCC that regulating broadband like a public utility "harms" providers, would be "fatal to the Internet," and could "limit economic freedom."​ According to research provided Friday by Maplight, the 28 House members received, on average, $26,832 from the "cable & satellite TV production & distribution" sector over a two-year period ending in December. According to the data, that's 2.3 times more than the House average of $11,651. What's more, one of the lawmakers who told the FCC that he had "grave concern" (PDF) about the proposed regulation took more money from that sector than any other member of the House. Rep. Greg Walden (R-OR) was the top sector recipient, netting more than $109,000 over the two-year period, the Maplight data shows.
  • Dan Newman, cofounder and president of Maplight, the California research group that reveals money in politics, said the figures show that "it's hard to take seriously politicians' claims that they are acting in the public interest when their campaigns are funded by companies seeking huge financial benefits for themselves." Signing a letter to the FCC along with Walden, who chairs the House Committee on Energy and Commerce, were three other key members of the same committee: Reps. Fred Upton (R-MI), Robert Latta (R-OH), and Marsha Blackburn (R-TN). Over the two-year period, Upton took in $65,000, Latta took $51,000, and Blackburn took $32,500. In a letter (PDF) those representatives sent to the FCC two days before Thursday's raucous FCC net neutrality hearing, the four wrote that they had "grave concern" over the FCC's consideration of "reclassifying Internet broadband service as an old-fashioned 'Title II common carrier service.'" The letter added that a switchover "harms broadband providers, the American economy, and ultimately broadband consumers," and that "actually doing so would be fatal to the Internet as we know it."
  • Not every one of the 28 members who publicly lobbied the FCC against net neutrality in advance of Thursday's FCC public hearing received campaign financing from the industry. One representative took no money: Rep. Nick Rahall (D-WV). In all, the FCC received at least three letters from House lawmakers with 28 signatures urging caution on classifying broadband as a telecommunications service, which would open up the sector to stricter "common carrier" rules, according to letters the members made publicly available. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice. Some consumer advocates say that common carrier status is needed for the FCC to impose strong network neutrality rules that would force ISPs to treat all traffic equally, not degrading competing services or speeding up Web services in exchange for payment. ISPs have argued that common carrier rules would saddle them with too much regulation and would force them to spend less on network upgrades and be less innovative.
  • Of the 28 House members signing on to the three letters, Republicans received, on average, $59,812 from the industry over the two-year period compared to $13,640 for Democrats, according to the Maplight data. Another letter (PDF) sent to the FCC this week from four top members of the House, including Speaker John Boehner (R-OH), Majority Leader Eric Cantor (R-VA), Majority Whip Kevin McCarthy (R-CA), and Republican Conference Chair Cathy McMorris Rodgers (R-WA), argued in favor of cable companies: "We are writing to respectfully urge you to halt your consideration of any plan to impose antiquated regulation on the Internet, and to warn that implementation of such a plan will needlessly inhibit the creation of American private sector jobs, limit economic freedom and innovation, and threaten to derail one of our economy's most vibrant sectors," they wrote. Over the two-year period, Boehner received $75,450; Cantor got $80,800; McCarthy got $33,000; and McMorris Rodgers got $31,500.
  • The third letter (PDF) forwarded to the FCC this week was signed by 20 House members. "We respectfully urge you to consider the effect that regressing to a Title II approach might have on private companies' ability to attract capital and their continued incentives to invest and innovate, as well as the potentially negative impact on job creation that might result from any reduction in funding or investment," the letter said. Here are the 28 lawmakers who lobbied the FCC this week and their reported campaign contributions:
Gonzalo San Gil, PhD.

FBI director to citizens: Let us spy on you | Ars Technica - 0 views

  • "The expanding options for communicating over the Internet and the increasing adoption of encryption technologies could leave law enforcement agents "in the dark" and unable to collect evidence against criminals, the Director of the FBI said in a speech on Thursday."
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputing a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
Paul Merrell

Theresa May to create new internet that would be controlled and regulated by government... - 1 views

  • Theresa May is planning to introduce huge regulations on the way the internet works, allowing the government to decide what is said online. Particular focus has been drawn to the end of the manifesto, which makes clear that the Tories want to introduce huge changes to the way the internet works. "Some people say that it is not for government to regulate when it comes to technology and the internet," it states. "We disagree." Senior Tories confirmed to BuzzFeed News that the phrasing indicates that the government intends to introduce huge restrictions on what people can post, share and publish online. The plans will allow Britain to become "the global leader in the regulation of the use of personal data and the internet", the manifesto claims. It comes soon after the Investigatory Powers Act came into law. That legislation allowed the government to force internet companies to keep records on their customers' browsing histories, as well as giving ministers the power to break apps like WhatsApp so that messages can be read. The manifesto makes reference to those increased powers, saying that the government will work even harder to ensure there is no "safe space for terrorists to be able to communicate online". That is apparently a reference in part to its work to encourage technology companies to build backdoors into their encrypted messaging services – which gives the government the ability to read terrorists' messages, but also weakens the security of everyone else's messages, technology companies have warned.
  • The government now appears to be launching a similarly radical change in the way that social networks and internet companies work. While much of the internet is currently controlled by private businesses like Google and Facebook, Theresa May intends to allow government to decide what is and isn't published, the manifesto suggests. The new rules would include laws that make it harder than ever to access pornographic and other websites. The government will be able to place restrictions on seeing adult content and any exceptions would have to be justified to ministers, the manifesto suggests. The manifesto even suggests that the government might stop search engines like Google from directing people to pornographic websites. "We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm," the Conservatives write.
  • The laws would also force technology companies to delete anything that a person posted when they were under 18. But perhaps most unusually they would be forced to help controversial government schemes like its Prevent strategy, by promoting counter-extremist narratives. "In harnessing the digital revolution, we must take steps to protect the vulnerable and give people confidence to use the internet without fear of abuse, criminality or exposure to horrific content", the manifesto claims in a section called 'the safest place to be online'. The plans are in keeping with the Tories' commitment that the online world must be regulated as strongly as the offline one, and that the same rules should apply in both. "Our starting point is that online rules should reflect those that govern our lives offline," the Conservatives' manifesto says, explaining this justification for a new level of regulation. "It should be as unacceptable to bully online as it is in the playground, as difficult to groom a young child on the internet as it is in a community, as hard for children to access violent and degrading pornography online as it is in the high street, and as difficult to commit a crime digitally as it is physically."
  • The manifesto also proposes that internet companies will have to pay a levy, like the one currently paid by gambling firms. Just like with gambling, that money will be used to pay for advertising schemes to tell people about the dangers of the internet, in particular being used to "support awareness and preventative activity to counter internet harms", according to the manifesto. The Conservatives will also seek to regulate the kind of news that is posted online and how companies are paid for it. If elected, Theresa May will "take steps to protect the reliability and objectivity of information that is essential to our democracy" – and crack down on Facebook and Google to ensure that news companies get enough advertising money. If internet companies refuse to comply with the rulings – a suggestion that some have already made about the powers in the Investigatory Powers Act – then there will be a strict and strong set of ways to punish them. "We will introduce a sanctions regime to ensure compliance, giving regulators the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law," the manifesto reads. In laying out its plan for increased regulation, the Tories anticipate and reject potential criticism that such rules could put people at risk.
  • "While we cannot create this framework alone, it is for government, not private companies, to protect the security of people and ensure the fairness of the rules by which people and businesses abide," the document reads. "Nor do we agree that the risks of such an approach outweigh the potential benefits."
Paul Merrell

Nearly Everyone In The U.S. And Canada Just Had Their Private Cell Phone Location Data ... - 0 views

  • A company by the name of LocationSmart isn't having a particularly good month. The company recently received all the wrong kind of attention when it was caught up in a privacy scandal involving the nation's wireless carriers and our biggest prison phone monopoly. Like countless other companies and governments, LocationSmart buys your wireless location data from cell carriers. It then sells access to that data via a portal that can provide real-time access to a user's location via a tailored graphical interface using just the target's phone number.
  • Theoretically, this functionality is sold under the pretense that the tool can be used to track things like drug offenders who have skipped out of rehab. And ideally, all the companies involved were supposed to ensure that data lookup requests were accompanied by something vaguely resembling official documentation. But a recent deep dive by the New York Times noted how the system was open to routine abuse by law enforcement, after a Missouri sheriff used the system to routinely spy on judges and fellow law enforcement officers without much legitimate justification (or pesky warrants): "The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show. Between 2014 and 2017, the sheriff, Cory Hutcheson, used the service at least 11 times, prosecutors said. His alleged targets included a judge and members of the State Highway Patrol. Mr. Hutcheson, who was dismissed last year in an unrelated matter, has pleaded not guilty in the surveillance cases." It was yet another example of the way nonexistent-to-lax consumer privacy laws in the States (especially for wireless carriers) routinely come back to bite us. But then things got worse.
  • Driven by curiosity in the wake of the Times report, a PhD student at Carnegie Mellon University by the name of Robert Xiao discovered that the "try before you buy" system used by LocationSmart to advertise the cell location tracking system contained a bug: a bug so bad that it exposed the data of roughly 200 million wireless subscribers across the United States and Canada (read: nearly everybody). As we see all too often, the researcher highlighted how the security standards in place to safeguard this data were virtually nonexistent: "Due to a very elementary bug in the website, you can just skip that consent part and go straight to the location," said Robert Xiao, a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, in a phone call. "The implication of this is that LocationSmart never required consent in the first place," he said. "There seems to be no security oversight here."
  • Meanwhile, none of the four major wireless carriers have been willing to confirm any business relationship with LocationSmart, but all claim to be investigating the problem after the week of bad press. That this actually results in substantive changes to the nation's cavalier treatment of private user data is a wager few would be likely to make.