Future of the Web: Group items tagged "flow"


Paul Merrell

Profiled: From Radio to Porn, British Spies Track Web Users' Online Identities | Global ... - 0 views

  • One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps. The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens - all without a court order or judicial warrant.
  • The power of KARMA POLICE was illustrated in 2009, when GCHQ launched a top-secret operation to collect intelligence about people using the Internet to listen to radio shows. The agency used a sample of nearly 7 million metadata records, gathered over a period of three months, to observe the listening habits of more than 200,000 people across 185 countries, including the U.S., the U.K., Ireland, Canada, Mexico, Spain, the Netherlands, France, and Germany.
  • GCHQ’s documents indicate that the plans for KARMA POLICE were drawn up between 2007 and 2008. The system was designed to provide the agency with “either (a) a web browsing profile for every visible user on the Internet, or (b) a user profile for every visible website on the Internet.” The origin of the surveillance system’s name is not discussed in the documents. But KARMA POLICE is also the name of a popular song released in 1997 by the Grammy Award-winning British band Radiohead, suggesting the spies may have been fans. A verse repeated throughout the hit song includes the lyric, “This is what you’ll get, when you mess with us.”
  • ...3 more annotations...
  • GCHQ vacuums up the website browsing histories using “probes” that tap into the international fiber-optic cables that transport Internet traffic across the world. A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” - a term the agency uses to refer to metadata records - with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held - 41 percent - was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.” There was a simple aim at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs.
  • The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
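    A minimal sketch of what such class-based tagging might look like; the class names below are illustrative choices for this example, not a published microformat:

      <div class="chapter">
        <p class="epigraph">An opening quotation, tagged by editorial role rather than by appearance.</p>
        <p class="summary">A one-paragraph abstract of the chapter.</p>
        <p>Ordinary body paragraphs need no class attribute at all.</p>
      </div>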
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
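    For readers who have never unzipped one, a rough sketch of the conventional layout, assuming the usual (but not mandated) folder and file names: a mimetype file, a META-INF/container.xml, a package (.opf) file listing metadata and every component, and the XHTML chapters themselves. The container file is tiny; it simply points the reading system at the package file:

      <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <!-- OEBPS/content.opf holds the book metadata and the manifest of XHTML chapters -->
          <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>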
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
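    A small before-and-after sketch of the kind of clean-up described above (the class name is a stand-in for whatever the export actually emits):

      <!-- As exported: presentation-oriented, list items are just classed paragraphs -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After clean-up: structural XHTML that any tool can interpret -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>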
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
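    To make the transformation step concrete, a stripped-down stylesheet in this spirit might look like the sketch below; the output element names are placeholders rather than real IDML, and a production script maps many more elements. With xsltproc it would be run as: xsltproc transform.xsl chapter.xhtml > chapter.xml.

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml"
          exclude-result-prefixes="xhtml">
        <xsl:output method="xml" indent="yes"/>

        <!-- Wrap the whole document in a placeholder root element -->
        <xsl:template match="/">
          <Document>
            <xsl:apply-templates select="//xhtml:p"/>
          </Document>
        </xsl:template>

        <!-- Each XHTML paragraph becomes a placeholder paragraph element -->
        <xsl:template match="xhtml:p">
          <TargetParagraph>
            <xsl:value-of select="."/>
          </TargetParagraph>
        </xsl:template>
      </xsl:stylesheet>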
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
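    For a feel of the output side, the body of an ICML file is built from paragraph and character style ranges. The fragment below is a hand-written approximation of that general shape; the style names are placeholders, and it is not taken from the authors' actual script:

      <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
        <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/[No character style]">
          <!-- Each XHTML paragraph ends up as a styled range that InDesign can place -->
          <Content>Text of the paragraph, carried over from the Web version.</Content>
        </CharacterStyleRange>
        <Br/>
      </ParagraphStyleRange>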
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ... - 0 views

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • ...2 more annotations...
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Gonzalo San Gil, PhD.

Public Media Joins Forces for One Big Platform - 0 views

  •  
    Power for The Public Services... For The Public Media... For Every@ne's Voice.
  •  
    NEW YORK - The country's five silos of public radio and television are spilling into each other with a joint program that will allow them - and eventually the public itself - to build apps, stations, websites and other media services combining audio, text and video content from every public radio and television outlet in the country. NPR president and CEO Vivian Schiller appeared at Wired's Disruptive by Design conference Monday morning to announce the new Public Media Platform, a partnership between American Public Media, National Public Radio, Public Broadcasting Services (PBS), Public Radio International and the Public Radio Exchange distribution network. The Public Media Platform is "a series of platforms that will allow all of the content from all of those entities - whether news or cultural products - to flow freely among the partners and member stations, and ultimately, also to other publishers, other not-for-profits and software developers who will invent wonderful new products that we can't even imagine," said Schiller.
  •  
    This strikes me at first blush as a potentially disruptive move by public radio and television stations and networks, somewhat akin to the disruptive free and open source software movement. I.e., because the content is free and will apparently be freely available to the public for recycling, we may see the emergence of a viral free meta-network of the kind that content providers stuck behind paywalls can't imitate. Potentially a significant content commons counter-balancing force to paywall content providers. The devil is in the details and implementation, of course.
Gary Edwards

Word 2007 XAML Generator - Home - 0 views

  •  
    The OOXML <> XAML "fixed/flow" converter first appeared in the December 2007 MSOffice beta SDK. Now it's an easy-to-install MSOffice plug-in. So, where's that port of XUL to WebKit?
  •  
    Project Description A Word 2007 Add-in that converts the Office Open XML (WordprocessingML) to XAML: For WPF, the document is converted into a FlowDocument element. For Silverlight 2 the document is converted into a StackPanel element containing TextBlock elements.
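    Roughly speaking, the add-in's job is a mapping between two XML vocabularies. The paired fragments below are a hand-made sketch of the kind of correspondence involved, not output captured from the actual converter:

      <!-- WordprocessingML source: a single paragraph with one text run -->
      <w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
        <w:r><w:t>Hello from Word 2007.</w:t></w:r>
      </w:p>

      <!-- A corresponding WPF FlowDocument paragraph -->
      <FlowDocument xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
        <Paragraph><Run>Hello from Word 2007.</Run></Paragraph>
      </FlowDocument>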
Gary Edwards

The real reason Google is making Chrome | Computerworld Blogs - 0 views

  •  
    Good analysis by Stephen Vaughan-Nichols. He gets it right. Sort of. Stephen believes that Chrome is designed to kill MSOffice. Maybe, but I think it's way too late for that. IMHO, Chrome is designed to keep Google and the Open Web in the game - a game that Microsoft is likely to run away with. Microsoft has built an easy-to-use transition bridge from the MSOffice desktop-centric "client/server" computing model to a Web-centric but proprietary RiA-WebStack-Cloud model. In short, there is an ongoing great transition of traditional client/server apps to an emerging model we might call the client / WebStack-Cloud-RiA / server computing model. As the world shifts from a Web document model to one driven by Web Applications, there is, I believe, a complementary shift towards the advantage Microsoft holds via the desktop "client/server" monopoly. For Microsoft, this is just a transition. Painful from a monopolist profitability viewpoint - but unavoidably necessary. The transition is no doubt helped by the OOXML <> XAML "Fixed/flow" Silverlight-ready conversion component. MS also has a WebStack-Cloud (Mesh) story that has become an unstoppable juggernaut (Exchange/SharePoint/SQL Server as the WebStack). WebKit-based RiA challengers like Adobe Apollo, Google Chrome, and Apple SproutCore-Cocoa have to figure out how to crack into the great transition. MS has succeeded in protecting their MSOffice monopoly until such time as they had all the transition pieces in place. They have a decided advantage here. It's also painfully obvious that while the WebKit guys have incredible innovation on their side, they are still years behind the complete desktop-to-WebStack-RiA-Cloud-to-device-to-legacy-servers application story Microsoft is now selling into the marketplace. They are also seriously lacking in developer tools. Still, the future of the Open Web hangs in the balance. Rather than trying to kill MSOffice, I would think a better approach would be that of trying to
  •  
    There are five reasons why Google is doing this, and, if you read the comic book closely - yes, I'm serious - and you know technology you can see the reasons for yourself. These, in turn, lead to what I think is Google's real goal for Chrome.
  •  
    I'm still keeping the door open on a suspicion that Microsoft may have planned to end the life of MS Office after the new fortress on the server side is ready. The code base is simply too brittle to have a competitive future in the feature wars. I can't get past my belief that if Microsoft saw any future in the traditional client-side office suite, it would have been building a new one a decade ago. Too many serious bugs too deeply buried in spaghetti code to fix; it's far easier to rebuild from the ground up. Word dates to 1984, Excel to 1985, PowerPoint to 1987. All were developed for the Mac and ported years later to Windows. Word, at least, is still running a deeply flawed 16-bit page layout engine. E.g., page breaks across subdocuments have been broken since Word 1.0. Technology designed to replace yet still largely defined by its predecessor, the IBM Correcting Selectric electro-mechanical typewriter. Mid-80s stand-alone, non-networked computer technology in the World Wide Web era? Where's the future in software architecture developed two decades ago, before the Connected World? I suspect Office's end is near. Microsoft's problem is migrating their locked-in customers to the new fortress on the server side. The bridge is OOXML. In other words, Google doesn't have to kill Office; Microsoft will do that itself. Giving the old cash cow a face lift and fresh coat of lipstick? That's the surest sign that the old cow's owner is keeping a close eye on prices in the commodity hamburger market while squeezing out the last few buckets of milk.
Paul Merrell

Dare Obasanjo aka Carnage4Life - Not Turtles, AtomPub All the Way Down - 0 views

  • I don't think the Atom publishing protocol can be considered the universal protocol for talking to remote databases given that cloud storage vendors like Amazon and database vendors like Oracle don't support it yet. That said, this is definitely a positive trend. Back in the RSS vs. Atom days I used to get frustrated that people were spending so much time reinventing the wheel with an RSS clone when the real gaping hole in the infrastructure was a standard editing protocol. It took a little longer than I expected (Sam Ruby started talking about it in 2003) but the effort has succeeded way beyond my wildest dreams. All I wanted was a standard editing protocol for blogs and content management systems and we've gotten so much more.
  • Microsoft is using AtomPub as the interface to a wide breadth of services and products, as George Moore points out in his post A Unified Standards-Based Protocols and Tooling Platform for Storage from Microsoft
  • And a few weeks after George's post even more was revealed in posts such as this one about FeedSync and Live Mesh where we find out: Congratulations to the Live Mesh team, who announced their Live Mesh Technology Preview release earlier this evening! Amit Mital gives a detailed overview in this post on http://dev.live.com. You can read all about it in the usual places...so why do I mention it here? FeedSync is one of the core parts of the Live Mesh platform. One of the key values of Live Mesh is that your data flows to all of your devices. And rather than being hidden away in a single service, any properly authenticated user has full bidirectional sync capability. As I discussed in the Introduction to FeedSync, this really makes "your stuff yours". Okay, FeedSync isn't really AtomPub but it does use the Atom syndication format so I count that as a win for Atom+APP as well. As time goes on, I hope we'll see even more products and services that support Atom and AtomPub from Microsoft. Standardization at the protocol layer means we can move innovation up the stack.
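    For context, the "standard editing protocol" is just HTTP plus Atom-format payloads: a client creates a resource by POSTing an entry to a collection URI and edits it later by PUTting a revised copy back. A minimal sketch of such an entry, with placeholder ID, date, and text:

      <entry xmlns="http://www.w3.org/2005/Atom">
        <title>A new draft post</title>
        <id>urn:uuid:0c5ef0a4-1111-4222-8333-444455556666</id>
        <updated>2008-05-01T12:00:00Z</updated>
        <author><name>Example Author</name></author>
        <!-- POSTing this document to a collection creates the resource -->
        <content type="text">Entry body created via the Atom Publishing Protocol.</content>
      </entry>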
Paul Merrell

LEAKED: Secret Negotiations to Let Big Brother Go Global | Wolf Street - 0 views

  • Much has been written, at least in the alternative media, about the Trans Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership (TTIP), two multilateral trade treaties being negotiated between the representatives of dozens of national governments and armies of corporate lawyers and lobbyists (on which you can read more here, here and here). However, much less is known about the decidedly more secretive Trade in Services Agreement (TiSA), which involves more countries than either of the other two. At least until now, that is. Thanks to a leaked document jointly published by the Associated Whistleblowing Press and Filtrala, the potential ramifications of the treaty being hashed out behind hermetically sealed doors in Geneva are finally seeping out into the public arena.
  • If signed, the treaty would affect all services ranging from electronic transactions and data flow, to veterinary and architecture services. It would almost certainly open the floodgates to the final wave of privatization of public services, including the provision of healthcare, education and water. Meanwhile, already privatized companies would be barred from re-transfer to the public sector by a so-called “ratchet clause” – even if the privatization failed. More worrisome still, the proposal stipulates that no participating state can stop the use, storage and exchange of personal data relating to their territorial base. Here’s more from Rosa Pavanelli, general secretary of Public Services International (PSI):
  • The leaked documents confirm our worst fears that TiSA is being used to further the interests of some of the largest corporations on earth (…) Negotiation of unrestricted data movement, internet neutrality and how electronic signatures can be used strike at the heart of individuals’ rights. Governments must come clean about what they are negotiating in these secret trade deals. Fat chance of that, especially in light of the fact that the text is designed to be almost impossible to repeal, and is to be “considered confidential” for five years after being signed. What that effectively means is that the U.S. approach to data protection (read: virtually non-existent) could very soon become the norm across 50 countries spanning the breadth and depth of the industrial world.
  • ...1 more annotation...
  • The main players in the top-secret negotiations are the United States and all 28 members of the European Union. However, the broad scope of the treaty also includes Australia, Canada, Chile, Colombia, Costa Rica, Hong Kong, Iceland, Israel, Japan, Liechtenstein, Mexico, New Zealand, Norway, Pakistan, Panama, Paraguay, Peru, South Korea, Switzerland, Taiwan and Turkey. Combined they represent almost 70 percent of all trade in services worldwide. An explicit goal of the TiSA negotiations is to overcome the exceptions in GATS that protect certain non-tariff trade barriers, such as data protection. For example, the draft Financial Services Annex of TiSA, published by Wikileaks in June 2014, would allow financial institutions, such as banks, the free transfer of data, including personal data, from one country to another. As Ralf Bendrath, a senior policy advisor to the MEP Jan Philipp Albrecht, writes in State Watch, this would constitute a radical carve-out from current European data protection rules:
Gonzalo San Gil, PhD.

Why Kim Dotcom hasn't been extradited 3 years after the US smashed Megaupload | Ars Tec... - 0 views

  •  
    "Why Kim Dotcom hasn't been extradited 3 years after the US smashed Megaupload. An extradition hearing is set for June 2015. Based on history, don't hold your breath. by Cyrus Farivar - Jan 18, 2015 10:30 pm UTC. Kim Dotcom made his initial play for the Billboard charts in late 2011. Kim Dotcom has never been shy. And in December 2011, roughly a month before things for Dotcom were set to drastically change, he still oozed with bravado: Dotcom released a song ("The Megaupload Song") in conjunction with producer Printz Board. It featured a number of major pop stars - including the likes of Kanye West, Jamie Foxx, and Serena Williams - all singing that they "love Megaupload."" [ # ! this is not an #IntellectualProperty #enforcement #issue, # ! it is a bunch of #governments (& friend companies) saying # ! #citizens that #information & #culture belong to '#Them'... # ! ;/ # ! ... and this doesn't work this way. # ! :) ]
Paul Merrell

Remaining Snowden docs will be released to avert 'unspecified US war' - Cryp... - 1 views

  • All the remaining Snowden documents will be released next month, according to whistle-blowing site Cryptome, which said in a tweet that the release of the info by unnamed third parties would be necessary to head off an unnamed "war". Cryptome said it would "aid and abet" the release of "57K to 1.7M" new documents that had been "withheld for national security-public debate [sic]". The site clarified that it will not be publishing the documents itself. Transparency activists would welcome such a release, but such a move would be heavily criticised by intelligence agencies and military officials, who argue that Snowden's dump of secret documents has set US and allied (especially British) intelligence efforts back by years.
  • As things stand, the flow of Snowden disclosures is controlled by those who have access to the Snowden archive, which might possibly include Snowden confidants such as Glenn Greenwald and Laura Poitras. In some cases, even when these people release information to mainstream media organisations, it is then suppressed by these organisations after negotiation with the authorities. (In one such case, some key facts were later revealed by the Register.) "July is when war begins unless headed off by Snowden full release of crippling intel. After war begins not a chance of release," Cryptome tweeted on its official feed. "Warmongerers are on a rampage. So, yes, citizens holding Snowden docs will do the right thing," it said.
  • "For more on Snowden docs release in July watch for Ellsberg, special guest and others at HOPE, July 18-20: http://www.hope.net/schedule.html," it added. HOPE (Hackers On Planet Earth) is a well-regarded and long-running hacking conference organised by 2600 magazine. Previous speakers at the event have included Kevin Mitnick, Steve Wozniak and Jello Biafra. In other developments, Cryptome has started a Kickstarter fund to release its entire archive in the form of a USB stick archive. It wants to raise $100,000 to help it achieve its goal. More than $14,000 has already been raised. The funding drive follows a dispute between Cryptome and its host Network Solutions, which is owned by web.com. Access to the site was blocked following a malware infection last week. Cryptome founder John Young criticised the host, claiming it had over-reacted and had been slow to restore access to the site, which Cryptome criticised as a form of censorship. In response, Cryptome plans to more widely distribute its content across multiple sites as well as releasing the planned USB stick archive.
  •  
    Can't happen soon enough. 
Gonzalo San Gil, PhD.

Movie Chief: Obama is Scared to Push Google, ISPs on Piracy | TorrentFreak - 1 views

    • Gonzalo San Gil, PhD.
       
      "Movie chiefs" steer piracy to obtain control over Internet Media flow. Full stop. http://www.neowin.net/news/bittorrent-downloads-linked-to-hollywood-film-studios
Gonzalo San Gil, PhD.

Piracy Can Boost Digital Music Sales, Research Shows - TorrentFreak - 0 views

  •  
    Ernesto on January 21, 2016: A new academic paper published by the Economics Department of Queen's University examines the link between BitTorrent downloads and music album sales. The study shows that, depending on the circumstances, piracy can hurt sales or give them a boost through free promotion.
Paul Merrell

Bulk Collection Under Section 215 Has Ended… What's Next? | Just Security - 0 views

  • The first (and thus far only) roll-back of post-9/11 surveillance authorities was implemented over the weekend: The National Security Agency shuttered its program for collecting and holding the metadata of Americans’ phone calls under Section 215 of the Patriot Act. While bulk collection under Section 215 has ended, the government can obtain access to this information under the procedures specified in the USA Freedom Act. Indeed, some experts have argued that the Agency likely has access to more metadata because its earlier dragnet didn’t cover cell phones or Internet calling. In addition, the metadata of calls made by an individual in the United States to someone overseas and vice versa can still be collected in bulk — this takes place abroad under Executive Order 12333. No doubt the NSA wishes that this was the end of the surveillance reform story and the Paris attacks initially gave them an opening. John Brennan, the Director of the CIA, implied that the attacks were somehow related to “hand wringing” about spying and Sen. Tom Cotton (R-Ark.) introduced a bill to delay the shut down of the 215 program. Opponents of encryption were quick to say: “I told you so.”
  • But the facts that have emerged thus far tell a different story. It appears that much of the planning took place IRL (that’s “in real life” for those of you who don’t have teenagers). The attackers, several of whom were on law enforcement’s radar, communicated openly over the Internet. If France ever has a 9/11 Commission-type inquiry, it could well conclude that the Paris attacks were a failure of the intelligence agencies rather than a failure of intelligence authorities. Despite the passage of the USA Freedom Act, US surveillance authorities have remained largely intact. Section 702 of the FISA Amendments Act — which is the basis of programs like PRISM and the NSA’s Upstream collection of information from Internet cables — sunsets in the summer of 2017. While it’s difficult to predict the political environment that far out, meaningful reform of Section 702 faces significant obstacles. Unlike the Section 215 program, which was clearly aimed at Americans, Section 702 is supposedly targeted at foreigners and only picks up information about Americans “incidentally.” The NSA has refused to provide an estimate of how many Americans’ information it collects under Section 702, despite repeated requests from lawmakers and most recently a large cohort of advocates. The Section 215 program was held illegal by two federal courts (here and here), but civil attempts to challenge Section 702 have run into standing barriers. Finally, while two review panels concluded that the Section 215 program provided little counterterrorism benefit (here and here), they found that the Section 702 program had been useful.
  • There is, nonetheless, some pressure to narrow the reach of Section 702. The recent decision by the European Court of Justice in the safe harbor case suggests that data flows between Europe and the US may be restricted unless the PRISM program is modified to protect the information of Europeans (see here, here, and here for discussion of the decision and reform options). Pressure from Internet companies whose business is suffering — estimates run to the tune of $35 to 180 billion — as a result of disclosures about NSA spying may also nudge lawmakers towards reform. One of the courts currently considering criminal cases which rely on evidence derived from Section 702 surveillance may hold the program unconstitutional either on the basis of the Fourth Amendment or Article III for the reasons set out in this Brennan Center report. A federal district court in Colorado recently rejected such a challenge, although as explained in Steve’s post, the decision did not seriously explore the issues. Further litigation in the European courts too could have an impact on the debate.
  • ...2 more annotations...
  • The US intelligence community’s broadest surveillance authorities are enshrined in Executive Order 12333, which primarily covers the interception of electronic communications overseas. The Order authorizes the collection, retention, and dissemination of “foreign intelligence” information, which includes information “relating to the capabilities, intentions or activities of foreign powers, organizations or persons.” In other words, so long as they are operating outside the US, intelligence agencies are authorized to collect information about any foreign person — and, of course, any Americans with whom they communicate. The NSA has conceded that EO 12333 is the basis of most of its surveillance. While public information about these programs is limited, a few highlights give a sense of the breadth of EO 12333 operations: The NSA gathers information about every cell phone call made to, from, and within the Bahamas, Mexico, Kenya, the Philippines, and Afghanistan, and possibly other countries. A joint US-UK program tapped into the cables connecting internal Yahoo and Google networks to gather e-mail address books and contact lists from their customers. Another US-UK collaboration collected images from video chats among Yahoo users and possibly other webcam services. The NSA collects both the content and metadata of hundreds of millions of text messages from around the world. By tapping into the cables that connect global networks, the NSA has created a database of the location of hundreds of millions of mobile phones outside the US.
  • Given its scope, EO 12333 is clearly critical to those seeking serious surveillance reform. The path to reform is, however, less clear. There is no sunset provision that requires action by Congress and creates an opportunity for exposing privacy risks. Even in the unlikely event that Congress was inclined to intervene, it would have to address questions about the extent of its constitutional authority to regulate overseas surveillance. To the best of my knowledge, there is no litigation challenging EO 12333 and the government doesn’t give notice to criminal defendants when it uses evidence derived from surveillance under the order, so the likelihood of a court ruling is slim. The Privacy and Civil Liberties Oversight Board is currently reviewing two programs under EO 12333, but it is anticipated that much of its report will be classified (although it has promised a less detailed unclassified version as well). While the short-term outlook for additional surveillance reform is challenging, from a longer-term perspective, the distinctions that our law makes between Americans and non-Americans and between domestic and foreign collection cannot stand indefinitely. If the Fourth Amendment is to meaningfully protect Americans’ privacy, the courts and Congress must come to grips with this reality.
Paul Merrell

Challenge to data transfer tool used by Facebook will go to Europe's top court | TechCr... - 1 views

  • The five-week court hearing in what is a complex case delving into detail on US surveillance operations took place in February. The court issued its ruling today. The 153-page ruling starts by noting “this is an unusual case”, before going into a detailed discussion of the arguments and concluding that the DPC’s concerns about the validity of SCCs should be referred to the European Court of Justice for a preliminary ruling. Schrems is also the man responsible for bringing, in 2013, a legal challenge that ultimately struck down Safe Harbor — the legal mechanism that had oiled the pipe for EU-US personal data flows for fifteen years before the ECJ ruled it to be invalid in October 2015. Schrems’ argument had centered on U.S. government mass surveillance programs, as disclosed via the Snowden leaks, being incompatible with fundamental European privacy rights. After the ECJ struck down Safe Harbor he then sought to apply the same arguments against Facebook’s use of SCCs — returning to Ireland to make the complaint as that’s where the company has its European HQ. It’s worth noting that the European Commission has since replaced Safe Harbor with a new (and it claims more robust) data transfer mechanism, called the EU-US Privacy Shield — which is now, as Safe Harbor was, used by thousands of businesses. Although that too is facing legal challenges as critics continue to argue there is a core problem of incompatibility between two distinct legal regimes where EU privacy rights collide with US mass surveillance.
  • Schrems’ Safe Harbor challenge also started in the Irish Court before being ultimately referred to the ECJ. So there’s more than a little legal deja vu here, especially given the latest development in the case. In its ruling on the SCC issue, the Irish Court noted that a US ombudsperson position created under Privacy Shield to handle EU citizens complaints about companies’ handling of their data is not enough to overcome what it described as “well founded concerns” raised by the DPC regarding the adequacy of the protections for EU citizens data.
  • Making a video statement outside court in Dublin today, Schrems said the Irish court had dismissed Facebook’s argument that the US government does not undertake any surveillance.
  • ...3 more annotations...
  • In a written statement on the ruling Schrems added: “I welcome the judgement by the Irish High Court. It is important that a neutral Court outside of the US has summarized the facts on US surveillance in a judgement, after diving through more than 45,000 pages of documents in a five week hearing.”
  • On Facebook, he also said: “In simple terms, US law requires Facebook to help the NSA with mass surveillance and EU law prohibits just that. As Facebook is subject to both jurisdictions, they got themselves in a legal dilemma that they cannot possibly solve in the long run.”
  • While Schrems’ original complaint pertained to Facebook, the Irish DPC’s position means many more companies that use the mechanism could face disruption if SCCs are ultimately invalidated as a result of the legal challenge to their validity.
Paul Merrell

Washington becomes first state to pass law protecting net neutrality - Mar. 6, 2018 - 0 views

  • In a bipartisan effort, the state's legislators passed House Bill 2282, which was signed into law Monday by Gov. Jay Inslee. "Washington will be the first state in the nation to preserve the open internet," Inslee said at the bill signing. The state law, approved by the legislature last month, is intended to safeguard net neutrality protections, which have been repealed by the Federal Communications Commission and are scheduled to officially end April 23. Net neutrality requires internet service providers to treat all online content the same, meaning they can't deliberately speed up or slow down traffic from specific websites to put their own content at an advantage over rivals. The FCC's decision to overturn net neutrality has been championed by the telecom industry, but widely criticized by technology companies and consumer advocacy groups. Attorneys general from more than 20 red and blue states filed a lawsuit in January to stop the repeal. Inslee said the new measure would protect an open internet in Washington, which he described as having "allowed the free flow of information and ideas in one of the greatest demonstrations of free speech in our history." HB2282 bars internet service providers in the state from blocking content, applications, or services, or slowing down traffic on the basis of content or whether they got paid to favor certain traffic. The law goes into effect June 6.
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
  • ...17 more annotations...
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks. [An illustrative sketch of this kind of IP-to-cookie correlation appears after these annotations.]
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen. [A toy sketch of this selector-plus-justification workflow appears after these annotations.]
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. [An illustrative sketch of this host-versus-full-address split appears after these annotations.] In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explains in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
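
The Gemalto and Belgacom annotations above describe a concrete analytic step: feed target IP addresses into MUTANT BROTH and get back the web cookies, and hence the accounts, seen in traffic from those addresses. The sketch below is a minimal, purely illustrative version of that correlation over an invented event log; the field names (ip, cookie_domain, cookie_value), the data, and the function are assumptions made here, not the actual GCHQ schema or tooling.

    from collections import defaultdict

    # Hypothetical intercepted "events": each record ties a source IP address to a
    # web cookie observed in traffic from that address. All names and values here
    # are invented for illustration.
    EVENTS = [
        {"ip": "203.0.113.10", "cookie_domain": "linkedin.com", "cookie_value": "li_at=abc"},
        {"ip": "203.0.113.10", "cookie_domain": "google.com",   "cookie_value": "SID=def"},
        {"ip": "198.51.100.7", "cookie_domain": "yahoo.com",    "cookie_value": "Y=ghi"},
    ]

    def cookies_for_ips(event_log, target_ips):
        """Group the cookies observed for each IP address of interest."""
        hits = defaultdict(list)
        for event in event_log:
            if event["ip"] in target_ips:
                hits[event["ip"]].append((event["cookie_domain"], event["cookie_value"]))
        return dict(hits)

    # An analyst would supply IP addresses associated with a target organisation.
    print(cookies_for_ips(EVENTS, {"203.0.113.10"}))

Mapping the recovered cookies back to named accounts would be a separate lookup; the point of the sketch is only that bulk cookie metadata makes an IP-to-identity join a matter of a few dictionary operations.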
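
The policy-guidance annotation above notes that an analyst querying a metadata store such as MUTANT BROTH records only a short free-text justification per search, with no further authorisation step for U.K. selectors. The toy workflow below models that description in the simplest terms; the store, the audit-log shape, and the function name search_metadata are all hypothetical.

    # Toy metadata store keyed by "selectors" (an email address, phone number,
    # username, or IP address). Records and fields are invented for illustration.
    METADATA_EVENTS = [
        {"selector": "alice@example.org", "kind": "email", "ts": "2012-07-01T09:12Z"},
        {"selector": "alice@example.org", "kind": "email", "ts": "2012-07-03T18:40Z"},
        {"selector": "+44 20 7946 0000",  "kind": "phone", "ts": "2012-07-02T14:05Z"},
    ]

    AUDIT_LOG = []

    def search_metadata(selector, justification, analyst):
        """Log a one-line justification, then return every event matching the selector.

        Note what is absent: no warrant check and no nationality check, mirroring
        the description in the guidance papers cited above."""
        AUDIT_LOG.append({"analyst": analyst, "selector": selector,
                          "justification": justification})
        return [e for e in METADATA_EVENTS if e["selector"] == selector]

    print(search_metadata("alice@example.org", "target development", "analyst-42"))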
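
The retention annotation above gives a concrete dividing line: the full address you visit is treated as content, while the host on its own is treated as metadata. Below is a short sketch of that split using Python's standard urllib; the helper name classify_url is an invention for illustration.

    from urllib.parse import urlsplit

    def classify_url(url):
        """Split a visited address into the part treated as metadata (the host)
        and the part treated as content (the full address including its path)."""
        # urlsplit needs a scheme or a leading "//" to recognise the host part.
        parts = urlsplit(url if "://" in url or url.startswith("//") else "//" + url)
        return {"metadata": parts.netloc, "content": url}

    print(classify_url("www.gchq.gov.uk/what_we_do"))
    # -> {'metadata': 'www.gchq.gov.uk', 'content': 'www.gchq.gov.uk/what_we_do'}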
Paul Merrell

The De-Americanization of Internet Freedom - Lawfare - 0 views

  • Why did the internet freedom agenda fail? Goldsmith’s essay tees up, but does not fully explore, a range of explanatory hypotheses. The most straightforward have to do with unrealistic expectations and unintended consequences. The idea that a minimally regulated internet would usher in an era of global peace, prosperity, and mutual understanding, Goldsmith tells us, was always a fantasy. As a project of democracy and human rights promotion, the internet freedom agenda was premised on a wildly overoptimistic view about the capacity of information flows, on their own, to empower oppressed groups and effect social change. Embracing this market-utopian view led the United States to underinvest in cybersecurity, social media oversight, and any number of other regulatory tools. In suggesting this interpretation of where U.S. policymakers and their civil society partners went wrong, Goldsmith’s essay complements recent critiques of the neoliberal strains in the broader human rights and transparency movements. Perhaps, however, the internet freedom agenda has faltered not because it was so naïve and unrealistic, but because it was so effective at achieving its realist goals. The seeds of this alternative account can be found in Goldsmith’s concession that the commercial non-regulation principle helped companies like Apple, Google, Facebook, and Amazon grab “huge market share globally.” The internet became an increasingly valuable cash cow for U.S. firms and an increasingly potent instrument of U.S. soft power over the past two decades; foreign governments, in due course, felt compelled to fight back. If the internet freedom agenda is understood as fundamentally a national economic project, rather than an international political or moral crusade, then we might say that its remarkable early success created the conditions for its eventual failure. Goldsmith’s essay also points to a third set of possible explanations for the collapse of the internet freedom agenda, involving its internal contradictions. Magaziner’s notion of a completely deregulated marketplace, if taken seriously, is incoherent. As Goldsmith and Tim Wu have discussed elsewhere, it takes quite a bit of regulation for any market, including markets related to the internet, to exist and to work. And indeed, even as Magaziner proposed “complete deregulation” of the internet, he simultaneously called for new legal protections against computer fraud and copyright infringement, which were soon followed by extensive U.S. efforts to penetrate foreign networks and to militarize cyberspace. Such internal dissonance was bound to invite charges of opportunism, and to render the American agenda unstable.
Paul Merrell

The Internet May Be Underwater in 15 Years - 1 views

  • When the internet goes down, life as the modern American knows it grinds to a halt. Gone are the cute kitten photos and the Facebook status updates—but also gone are the signals telling stoplights to change from green to red, and doctors’ access to online patient records. A vast web of physical infrastructure undergirds the internet connections that touch nearly every aspect of modern life. Delicate fiber optic cables, massive data transfer stations, and power stations create a patchwork of literal nuts and bolts that facilitates the flow of zeros and ones. Now, research shows that a whole lot of that infrastructure sits squarely in the path of rising seas. Scientists mapped out the threads and knots of internet infrastructure in the U.S. and layered that on top of maps showing future sea level rise. What they found was ominous: Within 15 years, thousands of miles of fiber optic cable—and hundreds of pieces of other key infrastructure—are likely to be swamped by the encroaching ocean. And while some of that infrastructure may be water resistant, little of it was designed to live fully underwater. “So much of the infrastructure that's been deployed is right next to the coast, so it doesn't take much more than a few inches or a foot of sea level rise for it to be underwater,” says study coauthor Paul Barford, a computer scientist at the University of Wisconsin, Madison. “It was all deployed 20ish years ago, when no one was thinking about the fact that sea levels might come up.” [A minimal sketch of this kind of elevation overlay appears after these annotations.]
  • “This will be a big problem,” says Rae Zimmerman, an expert on urban adaptation to climate change at NYU. Large parts of internet infrastructure soon “will be underwater, unless they're moved back pretty quickly.”
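
The annotation above describes the study's method as layering infrastructure locations over maps of projected sea level rise. The sketch below reduces that overlay to its simplest form, a threshold test on per-asset elevations; the asset names, elevations, and the roughly one-foot scenario are invented stand-ins, not data from the study, which worked with detailed geospatial datasets.

    # Toy overlay of internet infrastructure against a projected rise in sea level.
    # Names and elevations are invented; the real analysis intersected mapped
    # infrastructure with coastal inundation projections rather than using a
    # single elevation number per asset.
    ASSETS = [
        {"name": "cable-landing-A", "elevation_m": 0.2},
        {"name": "data-center-B",   "elevation_m": 3.5},
        {"name": "fiber-segment-C", "elevation_m": 0.1},
    ]

    def at_risk(assets, projected_rise_m):
        """Return the assets whose elevation is at or below the projected rise."""
        return [a["name"] for a in assets if a["elevation_m"] <= projected_rise_m]

    # Roughly a foot of rise, in line with the "few inches or a foot" quote above.
    print(at_risk(ASSETS, 0.3))   # -> ['cable-landing-A', 'fiber-segment-C']

The real analysis is a spatial intersection rather than a per-asset threshold, but the logic of the overlay is the same: locate the assets that fall inside the projected water line.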
Paul Merrell

Press corner | European Commission - 0 views

  • The European Commission has informed Amazon of its preliminary view that it has breached EU antitrust rules by distorting competition in online retail markets. The Commission takes issue with Amazon systematically relying on non-public business data of independent sellers who sell on its marketplace, to the benefit of Amazon's own retail business, which directly competes with those third party sellers. The Commission also opened a second formal antitrust investigation into the possible preferential treatment of Amazon's own retail offers and those of marketplace sellers that use Amazon's logistics and delivery services. Executive Vice-President Margrethe Vestager, in charge of competition policy, said: “We must ensure that dual role platforms with market power, such as Amazon, do not distort competition. Data on the activity of third party sellers should not be used to the benefit of Amazon when it acts as a competitor to these sellers. The conditions of competition on the Amazon platform must also be fair. Its rules should not artificially favour Amazon's own retail offers or advantage the offers of retailers using Amazon's logistics and delivery services. With e-commerce booming, and Amazon being the leading e-commerce platform, a fair and undistorted access to consumers online is important for all sellers.”
  • Amazon has a dual role as a platform: (i) it provides a marketplace where independent sellers can sell products directly to consumers; and (ii) it sells products as a retailer on the same marketplace, in competition with those sellers. As a marketplace service provider, Amazon has access to non-public business data of third party sellers such as the number of ordered and shipped units of products, the sellers' revenues on the marketplace, the number of visits to sellers' offers, data relating to shipping, to sellers' past performance, and other consumer claims on products, including the activated guarantees. The Commission's preliminary findings show that very large quantities of non-public seller data are available to employees of Amazon's retail business and flow directly into the automated systems of that business, which aggregate these data and use them to calibrate Amazon's retail offers and strategic business decisions to the detriment of the other marketplace sellers. For example, it allows Amazon to focus its offers on the best-selling products across product categories and to adjust its offers in view of non-public data of competing sellers. [A toy sketch of this kind of aggregation appears after these annotations.] The Commission's preliminary view, outlined in its Statement of Objections, is that the use of non-public marketplace seller data allows Amazon to avoid the normal risks of retail competition and to leverage its dominance in the market for the provision of marketplace services in France and Germany, the biggest markets for Amazon in the EU. If confirmed, this would infringe Article 102 of the Treaty on the Functioning of the European Union (TFEU) that prohibits the abuse of a dominant market position.
  •  
    "In addition, the Commission opened a second antitrust investigation into Amazon's business practices that might artificially favour its own retail offers and offers of marketplace sellers that use Amazon's logistics and delivery services (the so-called "fulfilment by Amazon or FBA sellers"). In particular, the Commission will investigate whether the criteria that Amazon sets to select the winner of the "Buy Box" and to enable sellers to offer products to Prime users, under Amazon's Prime loyalty programme, lead to preferential treatment of Amazon's retail business or of the sellers that use Amazon's logistics and delivery services. The "Buy Box" is displayed prominently on Amazon's websites and allows customers to add items from a specific retailer directly into their shopping carts. Winning the "Buy Box" (i.e. being chosen as the offer that features in this box) is crucial to marketplace sellers as the Buy Box prominently shows the offer of one single seller for a chosen product on Amazon's marketplaces, and generates the vast majority of all sales. The other aspect of the investigation focusses on the possibility for marketplace sellers to effectively reach Prime users. Reaching these consumers is important to sellers because the number of Prime users is continuously growing and because they tend to generate more sales on Amazon's marketplaces than non-Prime users. If proven, the practice under investigation may breach Article 102 of the Treaty on the Functioning of the European Union (TFEU) that prohibits the abuse of a dominant market position. The Commission will now carry out its in-depth investigation as a matter of priority"
  •  
    On the filed charges, the violation seems to be fairly clear-cut and straightforward to prove. (DG Competition has really outstanding lawyers.) I suspect the real fight here will be over the remedy.
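
The preliminary finding quoted above is, at bottom, about a data flow: non-public per-seller figures being aggregated and fed into the platform's own retail decisions. The sketch below shows only the shape of such an aggregation over invented records; it is not a description of Amazon's actual systems, and every name and field in it is hypothetical.

    from collections import Counter

    # Invented per-seller marketplace records of the kind described as non-public:
    # units shipped of a given product by each independent third-party seller.
    MARKETPLACE_ORDERS = [
        {"seller": "seller-1", "product": "usb-cable", "units": 900},
        {"seller": "seller-2", "product": "usb-cable", "units": 400},
        {"seller": "seller-3", "product": "desk-lamp", "units": 50},
    ]

    def best_selling_products(orders, top_n=2):
        """Aggregate rivals' unit sales by product, the kind of signal that could
        steer a competing retail business toward the best-selling items."""
        totals = Counter()
        for order in orders:
            totals[order["product"]] += order["units"]
        return totals.most_common(top_n)

    print(best_selling_products(MARKETPLACE_ORDERS))
    # -> [('usb-cable', 1300), ('desk-lamp', 50)]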