
Future of the Web: Group items tagged "third"


Paul Merrell

4 Simple Changes to Stop Online Tracking | Electronic Frontier Foundation - 0 views

  • 4 Simple Changes to Stop Online Tracking: Install an adblocker with a privacy/tracking protection list, block third-party cookies, block referers, and install HTTPS Everywhere. In less than 10 minutes, you can drastically improve your privacy online and protect yourself against unwanted and invisible tracking. Note that these privacy safeguards will also be blocking some ads. EFF is working with online advertisers to try to convince them to provide real privacy protections for users, but until they agree to meaningful standards about online tracking, these steps will be necessary for users to safeguard their browsing privacy. Aside from removing ads, these changes won't affect your browsing experience on the vast majority of websites. It's possible, however, that a tiny fraction of websites may behave differently or break, in which case the easiest solution is to temporarily use a "private browsing" mode without the settings enabled, or a fresh browser profile/user with default settings.
Paul Merrell

Supreme Court Says Phones Can't Be Searched Without a Warrant - NYTimes.com - 0 views

  • In a sweeping victory for privacy rights in the digital age, the Supreme Court on Wednesday unanimously ruled that the police need warrants to search the cellphones of people they arrest. While the decision will offer protection to the 12 million people arrested every year, many for minor crimes, its impact will most likely be much broader. The ruling almost certainly also applies to searches of tablet and laptop computers, and its reasoning may apply to searches of homes and businesses and of information held by third parties like phone companies. “This is a bold opinion,” said Orin S. Kerr, a law professor at George Washington University. “It is the first computer-search case, and it says we are in a new digital age. You can’t apply the old rules anymore.”
  •  
    It is now beyond doubt that the Supreme Court is declining to authorize an Orwellian government surveillance future for the U.S. This sweeping, unanimous ruling definitely has broad application beyond cellphones, in no small part because the court recognized that cellphones of today are more like desktop computers and a host of other computerized devices than they are like the telephones of yesteryear. Hence, almost everything the court said afterward about the privacy rights in cellphones applies equally to all personal use computers. 
Gonzalo San Gil, PhD.

Guest Post: Five Reasons Why The Major Labels Didn't Blow It With Napster by @thetrickn... - 1 views

    • Gonzalo San Gil, PhD.
       
      # ! #Industry (#Politics) just don't want to share their business (of culture/thinking/VALUES Manipulation) with third parties...
  •  
    [May 30, 2015. Editor Charlie sez: We're pleased to get a chance to repost this must-read piece by industry veteran Jim McDermott, who brings great insights into the Napster history and the flaws in the narrative that the tech press has so eagerly promoted. You can also read Chris's 2008 interview about Napster with Andrew Orlowski in The Register, The Music Wars from 30,000 Feet.]
Paul Merrell

NSA Surveillance: Snowden Docs Raise Questions About U.S. Phone Calls - 0 views

  • Third in a series. Part 1 here; Part 2 here. When it comes to the National Security Agency’s recently disclosed use of automated speech recognition technology to search, index and transcribe voice communications, people in the United States may well be asking: But are they transcribing my phone calls? The answer is maybe. A clear-cut answer is elusive because documents in the Snowden archive describe the capability to turn speech into text, but not the extent of its use — and the U.S. intelligence community refuses to answer even the most basic questions on the topic.
  • Thanks to previous explorations of the Snowden archive and some documents released by the Obama administration, we know there are four major methods the NSA uses to get access to phone calls involving Americans — and only one of them technically precludes the use of speech recognition.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
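  • As a concrete illustration of that class-based extensibility, here is a minimal XHTML sketch (the class names "epigraph" and "pull-quote" are hypothetical, not taken from any particular microformat or from the article):

        <div class="chapter">
          <p class="epigraph">Tell me, Muse, of the man of many turns...</p>
          <p>Ordinary body paragraphs need no class attribute at all.</p>
          <p class="pull-quote">A paragraph can carry editorial semantics
            through nothing more than its class value.</p>
        </div>

    A CSS stylesheet or an XSLT transformation can then treat class="epigraph" differently from plain paragraphs, which is the same lightweight semantic layering that microformats rely on.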
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
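  • To make the "simply disguised zip archive" description concrete, unzipping a typical ePub reveals a layout along these lines (only mimetype and META-INF/container.xml are fixed by the spec; the OEBPS directory and the individual file names are conventions, shown here for illustration):

        mimetype                  (the literal string application/epub+zip)
        META-INF/container.xml    (points the reading system at the package file)
        OEBPS/content.opf         (metadata and manifest of the book)
        OEBPS/toc.ncx             (table of contents for reader navigation)
        OEBPS/chapter01.xhtml     (the book content itself, chapter by chapter)
        OEBPS/styles.css          (presentation)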
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
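  • The anatomy is strikingly similar to ePub. Unzipped, an .idml package breaks out roughly as follows (paths paraphrased from Adobe's IDML documentation and shown for illustration; check the specification for the exact layout):

        designmap.xml            (the top-level map tying the parts together)
        Spreads/Spread_*.xml     (layout spreads)
        MasterSpreads/*.xml      (master pages)
        Stories/Story_*.xml      (the text content, story by story)
        Resources/Styles.xml     (paragraph and character style definitions)
        Resources/Graphic.xml    (colours and swatches)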
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
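  • A sketch of the kind of clean-up involved (the class name "bullet-item" is made up for illustration; the classes InDesign actually emits depend on the document's style names):

        <!-- as exported via "Digital Editions": presentation-oriented -->
        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

        <!-- after search-and-replace: structural markup -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>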
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
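  • The flavour of that transformation can be suggested with a heavily simplified XSLT sketch. The output-side names below (ParagraphStyleRange, AppliedParagraphStyle, Content) follow the general pattern of InCopy/ICML markup but are illustrative only; the authors' actual script handles many more structures, emits the surrounding story wrapper, and should be checked against Adobe's IDML specification:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>

          <!-- each XHTML paragraph becomes a styled paragraph range -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>

          <!-- headings map to a different paragraph style -->
          <xsl:template match="h:h1">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading 1">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Saved as, say, xhtml-to-icml.xsl (a hypothetical file name), a stylesheet like this is run with the xsltproc processor mentioned above, e.g. xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml, and the resulting file is then placed into the InDesign template.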
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
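  • The "special wrapper" is genuinely small. Besides the OPF package file that eCub generates, the only fixed piece of plumbing is META-INF/container.xml, which simply points the reading system at the package (the OEBPS/content.opf path shown is the common convention used in this sketch; the actual name is up to the packager):

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0"
            xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                      media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>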
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Federal court rules in favor of NSA bulk snooping, White House happy - RT USA - 3 views

  •  
    "Despite the opposition of the US public and lawmakers to NSA surveillance, the courts keep handing the Obama administration the license to snoop. A US appeals court just threw out a 2013 verdict against the NSA, to White House approval. "
  • ...1 more comment...
  •  
    I've read the court's decision. The article in RT overstates the breadth of the court's holding very substantially. The court did not throw the case out. Instead, by a 2-1 vote it vacated the district court's grant of a preliminary injunction and remanded the case for further proceedings including for the lower court judge to decide whether discovery should be allowed. The third judge would have thrown the case out. The decision does, however, steepen the slope the plaintiffs must climb to prevail in a renewed effort to obtain an injunction. That is regrettable, in my view. The article states: "The decision vindicates the government's stance that NSA's bulk surveillance programs are constitutional, the White House said Friday." In fact, the court's decision does not even touch on the topic of the program's constitutionality, reaching only the issue of standing. The article should either have omitted the statement or pointed out the error in the government's statement.
  •  
    # ! Thank you, Paul, for the observation. Anyway, what it seems is that Citizens worldwide are going to be spied on... judges aside, and, I'm afraid, not always with 'security issues' in the Agency's mind...
  •  
    I agree, Gonzalo. Most of the "terrorist" groups the U.S. claims to be concerned with were in fact created by the U.S. Terrorism is simply the easiest means for the government to defend these surveillance programs. But the disclosures that the NSA spies for other purposes just doesn't get the coverage in mainstream media that might otherwise force changes. It's the Politics of Fear.
Paul Merrell

Fourth Circuit adopts mosaic theory, holds that obtaining "extended" cell-site records ... - 0 views

  • A divided Fourth Circuit has ruled, in United States v. Graham, that “the government conducts a search under the Fourth Amendment when it obtains and inspects a cell phone user’s historical [cell-site location information] for an extended period of time” and that obtaining such records requires a warrant. The new case creates multiple circuit splits, which may lead to Supreme Court review. Specifically, the decision creates a clear circuit split with the Fifth and Eleventh Circuits on whether acquiring cell-site records is a search. It also creates an additional clear circuit split with the Eleventh Circuit on whether, if cell-site records are protected, a warrant is required. Finally, it also appears to deepen an existing split between the Fifth and Third Circuits on whether the Stored Communications Act allows the government to choose whether to obtain an intermediate court order or a warrant for cell-site records. This post will cover the reasoning of the new case in detail.
Paul Merrell

EurActiv.com - EU to oblige Microsoft to offer competitors' browsers | EU - European In... - 0 views

  • "If the Commission's preliminary conclusions as outlined in the recent statement of objections were confirmed, the Commission would intend to impose remedies that enabled users and manufacturers to make an unbiased choice between Internet Explorer and competing third party web browsers," Jonathan Todd, spokesperson for EU Competition Commissioner Neelie Kroes, told EurActiv.
  • This line stems from the mistakes the Commission recognised it had made by imposing remedies on Microsoft in the Media Player case (see background). Indeed, although Microsoft is now obliged to offer a version of Windows without Media Player, for the most part, users are opting for the readily available bundled offer, which provides extra software at the same price. "That remedy was rubbish," acknowledged an official in the Commission's competition department. 
Paul Merrell

GooSoft shapes super White Space database * The Register - 0 views

  • The world's largest software and search companies Wednesday announced the formation of the White Spaces Database Group with PC and broadcasting hardware and services specialists Dell, Hewlett Packard, Motorola, Comsearch, and NeuStar.
  • The White Spaces Database Group comes after months of concerted lobbying of the US Federal Communications Commission (FCC) by Microsoft, Google, and the other companies to make unused TV frequencies - white spaces - available for internet access by PCs and other devices.
  • The FCC last November ruled against broadcasters and said it would open up white spaces, but in a concession to their concerns, it stipulated the need for an online database that devices accessing the spectrum must read in order to find out what channels they are allowed to use. The database should be built and run by a third party and will be selected through a "public process."
Paul Merrell

PC World - Business Center: Yahoo's Zimbra Reaches for the Cloud - 0 views

  • Yahoo's Zimbra, the provider of a communications and collaboration suite that rivals Microsoft's Office and Outlook/Exchange, will make its cloud computing debut on Tuesday. Zimbra has so far relied entirely on third-party hosting partners to offer the SaaS (software-as-a-service) version of its suite. However, it is now taking advantage of Yahoo's data centers to become a hosting provider for customers in the education market that want to access the suite on demand.
Maluvia Haseltine

Tonido - Privacy and Online Freedom from third-party servers | Personal Web Application... - 0 views

  •  
    Tonido - Cloud Computing without the Cloud?
Paul Merrell

A Short Guide to the Internet's Biggest Enemies | Electronic Frontier Foundation - 1 views

  • Reporters Without Borders (RSF) released its annual “Enemies of the Internet” index this week—a ranking first launched in 2006 intended to track countries that repress online speech, intimidate and arrest bloggers, and conduct surveillance of their citizens.  Some countries have been mainstays on the annual index, while others have been able to work their way off the list.  Two countries particularly deserving of praise in this area are Tunisia and Myanmar (Burma), both of which have stopped censoring the Internet in recent years and are headed in the right direction toward Internet freedom. In the former category are some of the world’s worst offenders: Cuba, North Korea, China, Iran, Saudi Arabia, Vietnam, Belarus, Bahrain, Turkmenistan, Syria.  Nearly every one of these countries has amped up their online repression in recent years, from implementing sophisticated surveillance (Syria) to utilizing targeted surveillance tools (Vietnam) to increasing crackdowns on online speech (Saudi Arabia).  These are countries where, despite advocacy efforts by local and international groups, no progress has been made. The newcomers  A third, perhaps even more disheartening category, is the list of countries new to this year's index.  A motley crew, these nations have all taken new, harsh approaches to restricting speech or monitoring citizens:
  • United States: This is the first time the US has made it onto RSF’s list.  While the US government doesn’t censor online content, and pours money into promoting Internet freedom worldwide, the National Security Agency’s unapologetic dragnet surveillance and the government’s treatment of whistleblowers have earned it a spot on the index. United Kingdom: The European nation has been dubbed by RSF as the “world champion of surveillance” for its recently-revealed depraved strategies for spying on individuals worldwide.  The UK also joins countries like Ethiopia and Morocco in using terrorism laws to go after journalists.  Not noted by RSF, but also important, is the fact that the UK is also cracking down on legal pornography, forcing Internet users to opt-in with their ISP if they wish to view it and creating a slippery slope toward overblocking.  This is in addition to the government’s use of an opaque, shadowy NGO to identify child sexual abuse images, sometimes resulting instead in censorship of legitimate speech.
Paul Merrell

Bankrolled by broadband donors, lawmakers lobby FCC on net neutrality | Ars Technica - 1 views

  • The 28 House members who lobbied the Federal Communications Commission to drop net neutrality this week have received more than twice the amount in campaign contributions from the broadband sector than the average for all House members. These lawmakers, including the top House leadership, warned the FCC that regulating broadband like a public utility "harms" providers, would be "fatal to the Internet," and could "limit economic freedom."​ According to research provided Friday by Maplight, the 28 House members received, on average, $26,832 from the "cable & satellite TV production & distribution" sector over a two-year period ending in December. According to the data, that's 2.3 times more than the House average of $11,651. What's more, one of the lawmakers who told the FCC that he had "grave concern" (PDF) about the proposed regulation took more money from that sector than any other member of the House. Rep. Greg Walden (R-OR) was the top sector recipient, netting more than $109,000 over the two-year period, the Maplight data shows.
  • Dan Newman, cofounder and president of Maplight, the California research group that reveals money in politics, said the figures show that "it's hard to take seriously politicians' claims that they are acting in the public interest when their campaigns are funded by companies seeking huge financial benefits for themselves." Signing a letter to the FCC along with Walden, who chairs the House Committee on Energy and Commerce, were three other key members of the same committee: Reps. Fred Upton (R-MI), Robert Latta (R-OH), and Marsha Blackburn (R-TN). Over the two-year period, Upton took in $65,000, Latta took $51,000, and Blackburn took $32,500. In a letter (PDF) those representatives sent to the FCC two days before Thursday's raucous FCC net neutrality hearing, the four wrote that they had "grave concern" over the FCC's consideration of "reclassifying Internet broadband service as an old-fashioned 'Title II common carrier service.'" The letter added that a switchover "harms broadband providers, the American economy, and ultimately broadband consumers, actually doing so would be fatal to the Internet as we know it."
  • Not every one of the 28 members who publicly lobbied the FCC against net neutrality in advance of Thursday's FCC public hearing received campaign financing from the industry. One representative took no money: Rep. Nick Rahall (D-WV). In all, the FCC received at least three letters from House lawmakers with 28 signatures urging caution on classifying broadband as a telecommunications service, which would open up the sector to stricter "common carrier" rules, according to letters the members made publicly available. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice. Some consumer advocates say that common carrier status is needed for the FCC to impose strong network neutrality rules that would force ISPs to treat all traffic equally, not degrading competing services or speeding up Web services in exchange for payment. ISPs have argued that common carrier rules would saddle them with too much regulation and would force them to spend less on network upgrades and be less innovative.
  • ...2 more annotations...
  • Of the 28 House members signing on to the three letters, Republicans received, on average, $59,812 from the industry over the two-year period compared to $13,640 for Democrats, according to the Maplight data. Another letter (PDF) sent to the FCC this week from four top members of the House, including Speaker John Boehner (R-OH), Majority Leader Eric Cantor (R-VA), Majority Whip Kevin McCarthy (R-CA), and Republican Conference Chair Cathy McMorris Rodgers (R-WA), argued in favor of cable companies: "We are writing to respectfully urge you to halt your consideration of any plan to impose antiquated regulation on the Internet, and to warn that implementation of such a plan will needlessly inhibit the creation of American private sector jobs, limit economic freedom and innovation, and threaten to derail one of our economy's most vibrant sectors," they wrote. Over the two-year period, Boehner received $75,450; Cantor got $80,800; McCarthy got $33,000; and McMorris Rodgers got $31,500.
  • The third letter (PDF) forwarded to the FCC this week was signed by 20 House members. "We respectfully urge you to consider the effect that regressing to a Title II approach might have on private companies' ability to attract capital and their continued incentives to invest and innovate, as well as the potentially negative impact on job creation that might result from any reduction in funding or investment," the letter said. Here are the 28 lawmakers who lobbied the FCC this week and their reported campaign contributions:
Paul Merrell

American Surveillance Now Threatens American Business - The Atlantic - 0 views

  • What does it look like when a society loses its sense of privacy? In the almost 18 months since the Snowden files first received coverage, writers and critics have had to guess at the answer. Does a certain trend, consumer complaint, or popular product epitomize some larger shift? Is trust in tech companies eroding—or is a subset just especially vocal about it? Polling would make those answers clear, but polling so far has been… confused. A new study, conducted by the Pew Internet Project last January and released last week, helps make the average American’s view of his or her privacy a little clearer. And their confidence in their own privacy is ... low. The study's findings—and the statistics it reports—stagger. Vast majorities of Americans are uncomfortable with how the government uses their data, how private companies use and distribute their data, and what the government does to regulate those companies. No summary can equal a recounting of the findings. Americans are displeased with government surveillance en masse:
  • A new study finds that a vast majority of Americans trust neither the government nor tech companies with their personal data.
  • According to the study, 70 percent of Americans are “at least somewhat concerned” with the government secretly obtaining information they post to social networking sites. Eighty percent of respondents agreed that “Americans should be concerned” with government surveillance of telephones and the web. They are also uncomfortable with how private corporations use their data: Ninety-one percent of Americans believe that “consumers have lost control over how personal information is collected and used by companies,” according to the study. Eighty percent of Americans who use social networks “say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.” And even though they’re squeamish about the government’s use of data, they want it to regulate tech companies and data brokers more strictly: 64 percent wanted the government to do more to regulate private data collection. Since June 2013, American politicians and corporate leaders have fretted over how much the leaks would cost U.S. businesses abroad.
  • ...3 more annotations...
  • “It’s clear the global community of Internet users doesn’t like to be caught up in the American surveillance dragnet,” Senator Ron Wyden said last month. At the same event, Google chairman Eric Schmidt agreed with him. “What occurred was a loss of trust between America and other countries,” he said, according to the Los Angeles Times. “It's making it very difficult for American firms to do business.” But never mind the world. Americans don’t trust American social networks. More than half of the poll’s respondents said that social networks were “not at all secure.” Only 40 percent of Americans believe email or texting is at least “somewhat” secure. Indeed, Americans trusted most of all communication technologies where some protections have been enshrined into the law (though the report didn’t ask about snail mail). That is: Talking on the telephone, whether on a landline or cell phone, is the only kind of communication that a majority of adults believe to be “very secure” or “somewhat secure.”
  • (That may seem a bit incongruous, because making a telephone call is one area where you can be almost sure you are being surveilled: The government has requisitioned mass call records from phone companies since 2001. But Americans appear, when discussing security, to differentiate between the contents of the call and data about it.) Last month, Ramsey Homsany, the general counsel of Dropbox, said that one big thing could take down the California tech scene. “We have built this incredible economic engine in this region of the country,” said Homsany in the Los Angeles Times, “and [mistrust] is the one thing that starts to rot it from the inside out.” According to this poll, the mistrust has already begun corroding—and is already, in fact, well advanced. We’ve always assumed that the great hurt to American business will come globally—that citizens of other nations will stop using tech companies’s services. But the new Pew data shows that Americans suspect American businesses just as much. And while, unlike citizens of other nations, they may not have other places to turn, they may stop putting sensitive or delicate information online.
Paul Merrell

NZ Prime Minister John Key Retracts Vow to Resign if Mass Surveillance Is Shown - 0 views

  • In August 2013, as evidence emerged of the active participation by New Zealand in the “Five Eyes” mass surveillance program exposed by Edward Snowden, the country’s conservative Prime Minister, John Key, vehemently denied that his government engages in such spying. He went beyond mere denials, expressly vowing to resign if it were ever proven that his government engages in mass surveillance of New Zealanders. He issued that denial, and the accompanying resignation vow, in order to reassure the country over fears provoked by a new bill he advocated to increase the surveillance powers of that country’s spying agency, Government Communications Security Bureau (GCSB) — a bill that passed by one vote thanks to the Prime Minister’s guarantees that the new law would not permit mass surveillance.
  • Since then, a mountain of evidence has been presented that indisputably proves that New Zealand does exactly that which Prime Minister Key vehemently denied — exactly that which he said he would resign if it were proven was done. Last September, we reported on a secret program of mass surveillance at least partially implemented by the Key government that was designed to exploit the very law that Key was publicly insisting did not permit mass surveillance. At the time, Snowden, citing that report as well as his own personal knowledge of GCSB’s participation in the mass surveillance tool XKEYSCORE, wrote in an article for The Intercept: Let me be clear: any statement that mass surveillance is not performed in New Zealand, or that the internet communications are not comprehensively intercepted and monitored, or that this is not intentionally and actively abetted by the GCSB, is categorically false. . . . The prime minister’s claim to the public, that “there is no and there never has been any mass surveillance” is false. The GCSB, whose operations he is responsible for, is directly involved in the untargeted, bulk interception and algorithmic analysis of private communications sent via internet, satellite, radio, and phone networks.
  • A series of new reports last week by New Zealand journalist Nicky Hager, working with my Intercept colleague Ryan Gallagher, has added substantial proof demonstrating GCSB’s widespread use of mass surveillance. An article last week in The New Zealand Herald demonstrated that “New Zealand’s electronic surveillance agency, the GCSB, has dramatically expanded its spying operations during the years of John Key’s National Government and is automatically funnelling vast amounts of intelligence to the US National Security Agency.” Specifically, its “intelligence base at Waihopai has moved to ‘full-take collection,’ indiscriminately intercepting Asia-Pacific communications and providing them en masse to the NSA through the controversial NSA intelligence system XKeyscore, which is used to monitor emails and internet browsing habits.” Moreover, the documents “reveal that most of the targets are not security threats to New Zealand, as has been suggested by the Government,” but “instead, the GCSB directs its spying against a surprising array of New Zealand’s friends, trading partners and close Pacific neighbours.” A second report late last week published jointly by Hager and The Intercept detailed the role played by GCSB’s Waihopai base in aiding NSA’s mass surveillance activities in the Pacific (as Hager was working with The Intercept on these stories, his house was raided by New Zealand police for 10 hours, ostensibly to find Hager’s source for a story he published that was politically damaging to Key).
  • ...6 more annotations...
  • That the New Zealand government engages in precisely the mass surveillance activities Key vehemently denied is now barely in dispute. Indeed, a former director of GCSB under Key, Sir Bruce Ferguson, while denying any abuse of New Zealander’s communications, now admits that the agency engages in mass surveillance.
  • Meanwhile, Russel Norman, the head of the country’s Green Party, said in response to these stories that New Zealand is “committing crimes” against its neighbors in the Pacific by subjecting them to mass surveillance, and insists that the Key government broke the law because that dragnet necessarily includes the communications of New Zealand citizens when they travel in the region.
  • So now that it’s proven that New Zealand does exactly that which Prime Minister Key vowed would cause him to resign if it were proven, is he preparing his resignation speech? No: that’s something a political official with a minimal amount of integrity would do. Instead — even as he now refuses to say what he has repeatedly said before: that GCSB does not engage in mass surveillance — he’s simply retracting his pledge as though it were a minor irritant, something to be casually tossed aside:
  • When asked late last week whether New Zealanders have a right to know what their government is doing in the realm of digital surveillance, the Prime Minister said: “as a general rule, no.” And he expressly refuses to say whether New Zealand is doing that which he swore repeatedly it was not doing, as this excellent interview from Radio New Zealand sets forth: Interviewer: “Nicky Hager’s revelations late last week . . . have stoked fears that New Zealanders’ communications are being indiscriminately caught in that net. . . . The Prime Minister, John Key, has in the past promised to resign if it were found to be mass surveillance of New Zealanders . . . Earlier, Mr. Key was unable to give me an assurance that mass collection of communications from New Zealanders in the Pacific was not taking place.” PM Key: “No, I can’t. I read the transcript [of former GCSB Director Bruce Ferguson’s interview] – I didn’t hear the interview – but I read the transcript, and you know, look, there’s a variety of interpretations – I’m not going to critique–”
  • Interviewer: “OK, I’m not asking for a critique. Let’s listen to what Bruce Ferguson did tell us on Friday:” Ferguson: “The whole method of surveillance these days, is sort of a mass collection situation – individualized: that is mission impossible.” Interviewer: “And he repeated that several times, using the analogy of a net which scoops up all the information. . . . I’m not asking for a critique with respect to him. Can you confirm whether he is right or wrong?” Key: “Uh, well I’m not going to go and critique the guy. And I’m not going to give a view of whether he’s right or wrong” . . . . Interviewer: “So is there mass collection of personal data of New Zealand citizens in the Pacific or not?” Key: “I’m just not going to comment on where we have particular targets, except to say that where we go and collect particular information, there is always a good reason for that.”
  • From “I will resign if it’s shown we engage in mass surveillance of New Zealanders” to “I won’t say if we’re doing it” and “I won’t quit either way despite my prior pledges.” Listen to the whole interview: both to see the type of adversarial questioning to which U.S. political leaders are so rarely subjected, but also to see just how obfuscating Key’s answers are. The history of reporting from the Snowden archive has been one of serial dishonesty from numerous governments: such as the way European officials at first pretended to be outraged victims of NSA only for it to be revealed that, in many ways, they are active collaborators in the very system they were denouncing. But, outside of the U.S. and U.K. itself, the Key government has easily been the most dishonest over the last 20 months: one of the most shocking stories I’ve seen during this time was how the Prime Minister simultaneously plotted in secret to exploit the 2013 proposed law to implement mass surveillance at exactly the same time that he persuaded the public to support it by explicitly insisting that it would not allow mass surveillance. But overtly reneging on a public pledge to resign is a new level of political scandal. Key was just re-elected for his third term, and like any political official who stays in power too long, he has the despot’s mentality that he’s beyond all ethical norms and constraints. But by the admission of his own former GCSB chief, he has now been caught red-handed doing exactly that which he swore to the public would cause him to resign if it were proven. If nothing else, the New Zealand media ought to treat that public deception from its highest political official with the level of seriousness it deserves.
  •  
    It seems the U.S. is not the only nation that has liars for head of state. 
Paul Merrell

Remaining Snowden docs will be released to avert 'unspecified US war' - Cryp... - 1 views

  • All the remaining Snowden documents will be released next month, according to whistle-blowing site Cryptome, which said in a tweet that the release of the info by unnamed third parties would be necessary to head off an unnamed "war". Cryptome said it would "aid and abet" the release of "57K to 1.7M" new documents that had been "withheld for national security-public debate [sic]". The site clarified that it will not be publishing the documents itself. Transparency activists would welcome such a release, but such a move would be heavily criticised by intelligence agencies and military officials, who argue that Snowden's dump of secret documents has set US and allied (especially British) intelligence efforts back by years.
  • As things stand, the flow of Snowden disclosures is controlled by those who have access to the Snowden archive, which might possibly include Snowden confidants such as Glenn Greenwald and Laura Poitras. In some cases, even when these people release information to mainstream media organisations, it is then suppressed by these organisations after negotiation with the authorities. (In one such case, some key facts were later revealed by the Register.) "July is when war begins unless headed off by Snowden full release of crippling intel. After war begins not a chance of release," Cryptome tweeted on its official feed. "Warmongerers are on a rampage. So, yes, citizens holding Snowden docs will do the right thing," it said.
  • "For more on Snowden docs release in July watch for Ellsberg, special guest and others at HOPE, July 18-20: http://www.hope.net/schedule.html," it added.HOPE (Hackers On Planet Earth) is a well-regarded and long-running hacking conference organised by 2600 magazine. Previous speakers at the event have included Kevin Mitnick, Steve Wozniak and Jello Biafra.In other developments, ‪Cryptome‬ has started a Kickstarter fund to release its entire archive in the form of a USB stick archive. It wants t‪o‬ raise $100,000 to help it achieve its goal. More than $14,000 has already been raised.The funding drive follows a dispute between ‪Cryptome‬ and its host Network Solutions, which is owned by web.com. Access to the site was bl‪o‬cked f‪o‬ll‪o‬wing a malware infection last week. ‪Cryptome‬ f‪o‬under J‪o‬hn Y‪o‬ung criticised the host, claiming it had ‪o‬ver-reacted and had been sl‪o‬w t‪o‬ rest‪o‬re access t‪o‬ the site, which ‪Cryptome‬ criticised as a form of cens‪o‬rship.In resp‪o‬nse, ‪Cryptome‬ plans to more widely distribute its content across multiple sites as well as releasing the planned USB stick archive. ®
  •  
    Can't happen soon enough. 
Paul Merrell

Verizon Injecting Perma-Cookies to Track Mobile Customers, Bypassing Privacy Controls |... - 0 views

  • Verizon users might want to start looking for another provider. In an effort to better serve advertisers, Verizon Wireless has been silently modifying its users' web traffic on its network to inject a cookie-like tracker. This tracker, included in an HTTP header called X-UIDH, is sent to every unencrypted website a Verizon customer visits from a mobile device. It allows third-party advertisers and websites to assemble a deep, permanent profile of visitors' web browsing habits without their consent. Verizon apparently created this mechanism to expand their advertising programs, but it has privacy implications far beyond those programs. Indeed, while we're concerned about Verizon's own use of the header, we're even more worried about what it allows others to find out about Verizon users. The X-UIDH header effectively reinvents the cookie, but does so in a way that is shockingly insecure and dangerous to your privacy. Worse still, Verizon doesn't let users turn off this "feature." In fact, it functions even if you use a private browsing mode or clear your cookies. You can test whether the header is injected in your traffic by visiting lessonslearned.org/sniff or amibeingtracked.com over a cell data connection; a minimal header-detection sketch also follows these notes. How X-UIDH Works, and Why It's a Problem
  • To compound the problem, the header also affects more than just web browsers. Mobile apps that send HTTP requests will also have the header inserted. This means that users' behavior in apps can be correlated with their behavior on the web, which would be difficult or impossible without the header. Verizon describes this as a key benefit of using their system. But Verizon bypasses the 'Limit Ad Tracking' settings in iOS and Android that are specifically intended to limit abuse of unique identifiers by mobile apps.
  • Because the header is injected at the network level, Verizon can add it to anyone using their towers, even those who aren't Verizon customers.
  • We're also concerned that Verizon's failure to permit its users to opt out of X-UIDH may be a violation of the federal law that requires phone companies to maintain the confidentiality of their customers' data. Only two months ago, the wireline sector of Verizon's business was hit with a $7.4 million fine by the Federal Communications Commission after it was caught using its "customers' personal information for thousands of marketing campaigns without even giving them the choice to opt out." With this header, it looks like Verizon lets its customers opt out of the marketing side of the program, but not from the disclosure of their browsing habits.
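    For readers who want to verify this on their own connection, here is a minimal sketch of the kind of check that test pages like lessonslearned.org/sniff and amibeingtracked.com perform: a tiny HTTP endpoint that simply reports whether the X-UIDH header arrived with the request. The code is an illustration, not the EFF's or Verizon's; the port number and response wording are arbitrary choices. Because the header is injected only into unencrypted traffic, the test is meaningful only over plain HTTP on a cellular data connection.

```python
# Minimal sketch: an HTTP endpoint that reports whether the incoming request
# carried the X-UIDH tracking header. Run it on a publicly reachable host and
# visit it over a cellular data connection; over Wi-Fi or HTTPS the header
# should normally be absent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UIDHCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        uidh = self.headers.get("X-UIDH")  # header lookup is case-insensitive
        if uidh:
            body = ("X-UIDH header present: %s\n"
                    "Your carrier appears to be injecting a tracking identifier.\n" % uidh)
        else:
            body = "No X-UIDH header seen on this request.\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Port 8080 is an arbitrary choice for this sketch.
    HTTPServer(("0.0.0.0", 8080), UIDHCheckHandler).serve_forever()
```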
Paul Merrell

Facebook and Corporate "Friends" Threat Exchange? | nsnbc international - 0 views

  • Facebook teamed up with several corporate “friends” to adapt Facebook’s in-house software so that cyber threats and their sources can be identified and shared with other corporations. Countering cyber threats sounds positive, but there are serious questions about transparency when smaller, independent media fall victim to major corporations’ unwillingness to reveal the source of attacks that resulted in websites being closed for hours or days. Transparency, yes, but for whom? Among the companies Facebook is teaming up with are Pinterest, Tumblr, Twitter, Yahoo, Dropbox and Bit.ly, reports Susanne Posel at Occupy Corporatism. The stated goal of “Threat Exchange” is to locate malware, the source domains and IP addresses involved, as well as the nature of the malware itself.
  • While the platform may be useful for major corporations, which can afford to buy the privilege of joining the club, the initiative does little to nothing to protect smaller, independent media from being targeted with impunity. The development prompts the question “Cyber security for whom?” The question is especially pertinent because identifying a site as containing malware, whether correctly or not, will result in the site being added to Google’s so-called “Safe Browsing List” (a lookup sketch follows these notes).
  • An article written by nsnbc editor-in-chief Christof Lehmann entitled “Censorship Alert: The Alternative Media are getting harassed by the NSA” provides several examples which raise serious questions about the lack of transparency when independent media demand information about real or alleged malware on their websites. Alleged malware in a JavaScript file that had been inserted via the third-party advertising company MadAdsMedia resulted in the nsnbc website being closed down and added to Google’s Safe Browsing list. nsnbc’s request for detailed information about the alleged malware and, most importantly, about its source was rejected. MadAdsMedia’s response to a renewed request was to stop serving advertisements to nsnbc from one day to the next, stating that nsnbc could contact another company, YieldSelect, which is run by the same company. Shell Games? SiteLock, which partners with most western-based web hosting providers, including BlueHost, Hostgator and many others, contacted nsnbc warning about an alleged malware threat. SiteLock refused to provide detailed information.
  • BlueHost refused to help the International Middle East Media Center (IMEMC) during a denial-of-service (DoS) attack. Asked for help, BlueHost reportedly said that IMEMC should deal with the issue themselves, which was impossible without BlueHost’s cooperation. The news agency’s website was down for days because BlueHost reportedly just shut down IMEMC’s server and told the editor-in-chief, Saed Bannoura, to “go somewhere else”. The question is whether “transparency” can be the privilege of major corporations, or whether there is a need for legislation that forces all corporations to provide the detailed information that would enable media and other internet users to pursue real or alleged malware threats, cyber attacks and so forth, criminally and legally, including when the alleged or real threat involves major corporations.
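    Because being added to Google's Safe Browsing list is the concrete harm described in the notes above, a site operator can at least monitor their own listing status. The sketch below assumes Google's publicly documented Safe Browsing v4 Lookup API (threatMatches:find) and an API key of your own; the client ID, key placeholder and example URL are illustrative, not values from the article, and this is one possible monitoring approach rather than anything nsnbc or Google prescribes.

```python
# Minimal sketch: ask the Google Safe Browsing v4 Lookup API whether a URL is
# currently flagged. Requires your own API key (placeholder below); uses only
# the Python standard library.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, obtain one from the Google API console
ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY

def check_url(url: str) -> list:
    payload = {
        "client": {"clientId": "example-site-monitor", "clientVersion": "0.1"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # An empty response body ({}) means no current matches for this URL.
    return result.get("matches", [])

if __name__ == "__main__":
    matches = check_url("http://example.org/")  # replace with the site to monitor
    if matches:
        print("Flagged by Safe Browsing:", matches)
    else:
        print("No Safe Browsing matches reported.")
```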
Paul Merrell

WikiLeaks - Secret Trans-Pacific Partnership Agreement (TPP) - Investment Chapter - 0 views

  • WikiLeaks releases today the "Investment Chapter" from the secret negotiations of the TPP (Trans-Pacific Partnership) agreement. The document adds to the previous WikiLeaks publications of the chapters for Intellectual Property Rights (November 2013) and the Environment (January 2014). The TPP Investment Chapter, published today, is dated 20 January 2015. The document is classified and supposed to be kept secret for four years after the entry into force of the TPP agreement or, if no agreement is reached, for four years from the close of the negotiations. Julian Assange, WikiLeaks editor, said: "The TPP has developed in secret an unaccountable supranational court for multinationals to sue states. This system is a challenge to parliamentary and judicial sovereignty. Similar tribunals have already been shown to chill the adoption of sane environmental protection, public health and public transport policies." Current TPP negotiation member states are the United States, Japan, Mexico, Canada, Australia, Malaysia, Chile, Singapore, Peru, Vietnam, New Zealand and Brunei. The TPP is the largest economic treaty in history, including countries that represent more than 40 per cent of the world's GDP.
  • The Investment Chapter highlights the intent of the TPP negotiating parties, led by the United States, to increase the power of global corporations by creating a supra-national court, or tribunal, where foreign firms can "sue" states and obtain taxpayer compensation for "expected future profits". These investor-state dispute settlement (ISDS) tribunals are designed to overrule the national court systems. ISDS tribunals introduce a mechanism by which multinational corporations can force governments to pay compensation if the tribunal states that a country's laws or policies affect the company's claimed future profits. In return, states hope that multinationals will invest more. Similar mechanisms have already been used. For example, US tobacco company Philip Morris used one such tribunal to sue Australia (June 2011 – ongoing) for mandating plain packaging of tobacco products on public health grounds; another was used by the oil giant Chevron against Ecuador in an attempt to evade a multi-billion-dollar compensation ruling for polluting the environment. The threat of future lawsuits chilled environmental and other legislation in Canada after it was sued by pesticide companies in 2008/9. ISDS tribunals are often held in secret, have no appeal mechanism, do not subordinate themselves to human rights laws or the public interest, and have few means by which other affected parties can make representations. The TPP negotiations have been ongoing in secrecy for five years and are now in their final stages. In the United States the Obama administration plans to "fast-track" the treaty through Congress without the ability of elected officials to discuss or vote on individual measures. This has met growing opposition as a result of increased public scrutiny following WikiLeaks' earlier releases of documents from the negotiations.
  • The TPP is set to be the forerunner to an equally secret agreement between the US and EU, the TTIP (Transatlantic Trade and Investment Partnership). Negotiations for the TTIP were initiated by the Obama administration in January 2013. Combined, the TPP and TTIP will cover more than 60 per cent of global GDP. The third treaty of the same kind, also negotiated in secrecy, is TISA, on trade in services, including the financial and health sectors. It covers 50 countries, including the US and all EU countries. WikiLeaks released the secret draft text of the TISA's financial annex in June 2014. All these agreements on so-called “free trade” are negotiated outside the World Trade Organization's (WTO) framework. Conspicuously absent from the countries involved in these agreements are the BRICs countries of Brazil, Russia, India and China. Read the Secret Trans-Pacific Partnership Agreement (TPP) - Investment chapter
  •  
    The previously leaked chapter on copyrights makes clear that the TPP would be a disaster for a knowledge society. This chapter makes clear that only corporations may compel arbitration; there is no corresponding right for human beings to do so. 
Paul Merrell

Victory for Users: Librarian of Congress Renews and Expands Protections for Fair Uses |... - 0 views

  • The new rules for exemptions to copyright's DRM-circumvention laws were issued today, and the Librarian of Congress has granted much of what EFF asked for over the course of months of extensive briefs and hearings. The exemptions we requested—ripping DVDs and Blu-rays for making fair use remixes and analysis; preserving video games and running multiplayer servers after publishers have abandoned them; jailbreaking cell phones, tablets, and other portable computing devices to run third-party software; and security research and modification and repairs on cars—have each been accepted, subject to some important caveats.
  • The exemptions are needed thanks to a fundamentally flawed law that forbids users from breaking DRM, even if the purpose is a clearly lawful fair use. As software has become ubiquitous, so has DRM.  Users often have to circumvent that DRM to make full use of their devices, from DVDs to games to smartphones and cars. The law allows users to request exemptions for such lawful uses—but it doesn’t make it easy. Exemptions are granted through an elaborate rulemaking process that takes place every three years and places a heavy burden on EFF and the many other requesters who take part. Every exemption must be argued anew, even if it was previously granted, and even if there is no opposition. The exemptions that emerge are limited in scope. What is worse, they only apply to end users—the people who are actually doing the ripping, tinkering, jailbreaking, or research—and not to the people who make the tools that facilitate those lawful activities. The section of the law that creates these restrictions—the Digital Millennium Copyright Act's Section 1201—is fundamentally flawed, has resulted in myriad unintended consequences, and is long past due for reform or removal altogether from the statute books. Still, as long as its rulemaking process exists, we're pleased to have secured the following exemptions.
  • The new rules are long and complicated, and we'll be posting more details about each as we get a chance to analyze them. In the meantime, we hope each of these exemptions enables more exciting fair uses that educate, entertain, improve the underlying technology, and keep us safer. A better long-term solution, though, is to eliminate the need for this onerous rulemaking process. We encourage lawmakers to support efforts like the Unlocking Technology Act, which would limit the scope of Section 1201 to copyright infringements—not fair uses. And as the White House looks for the next Librarian of Congress, who is ultimately responsible for issuing the exemptions, we hope to get a candidate who acts—as a librarian should—in the interest of the public's access to information.