
Gonzalo San Gil, PhD.

California Supreme Court Shows How Pharma 'Pay For Delay' Can Violate Antitrust Laws | ... - 0 views

  •  
    "from the antitrust dept For many years now, we've been talking about the problematic practice of "pay for delay" in the pharma industry. This involved patent holders paying generic pharmaceutical makers some amount of money to not enter the market in order to keep their own monopoly even longer."
Paul Merrell

First Look Publishes Open Source Code To Advance Privacy, Security, and Journalism - Th... - 0 views

  • today we’re excited to contribute back to the open source community by launching First Look Code, the home for our own open source projects related to privacy, security, data, and journalism. To begin with, First Look Code is the new home for document sanitization software PDF Redact Tools, and we’ve launched a brand new anti-gag order project called AutoCanary.
  • AutoCanary A warrant canary is a regularly published statement that a company hasn’t received any legal orders that it’s not allowed to talk about, such as a national security letter. Canaries can help prevent web publishers from misleading visitors and prevent tech companies from misleading users when they share data with the government and are prevented from talking about it. One such situation arose — without a canary in place — in 2013, when the U.S. government sent Lavabit, a provider of encrypted email services apparently used by Snowden, a legal request to access Snowden’s email, thwarting some of the very privacy protections Lavabit had promised users. This request included a gag order, so the company was legally prohibited from talking about it. Rather than becoming “complicit in crimes against the American people,” in his words, Lavabit founder Ladar Levison, chose to shut down the service.
  • Warrant canaries are designed to help companies in this kind of situation. You can see a list of companies that publish warrant canary statements at Canary Watch. As of today, First Look Media is among the companies that publish canaries. We’re happy to announce the first version of AutoCanary, a desktop program for Windows, Mac OS X, and Linux that makes the process of generating machine-readable, digitally-signed warrant canary statements simpler. Read more about AutoCanary on its new website.
  •  
    The internet continues to fight back against the Dark State. On the unsettled nature of the law regarding the use of warrant canaries in the U.S., see the EFF's FAQ: https://www.eff.org/deeplinks/2014/04/warrant-canary-faq (it needs a test case).
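The kind of statement AutoCanary automates can be sketched in a few lines. The wording, function name, and date handling below are illustrative assumptions, not AutoCanary's actual format, and a real canary would additionally be digitally signed (e.g. GPG-clearsigned) so readers can verify it:

```python
from datetime import date

def make_canary_statement(company, today=None):
    """Build a plain-text canary statement (illustrative wording only).

    A real deployment would sign the result, e.g. with
    `gpg --clearsign statement.txt`, to make it verifiable.
    """
    today = today or date.today().isoformat()
    return (
        f"Warrant canary for {company}\n"
        f"Date: {today}\n"
        f"As of this date, {company} has received no national security "
        f"letters or gag orders, and no searches or seizures have been "
        f"performed on its systems.\n"
    )

print(make_canary_statement("Example Media", today="2015-02-26"))
```

The value of regenerating and re-signing such a statement on a fixed schedule is that its *absence* on the expected date becomes the signal, which is exactly what a gag order cannot legally compel a company to fake.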
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
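The class-attribute extensibility described above can be exercised with nothing beyond the standard library. The class name `epigraph` here is an invented example of a publisher's own vocabulary, not a standard microformat:

```python
import xml.etree.ElementTree as ET

# A generic XHTML fragment carrying publisher semantics in the "class"
# attribute; "epigraph" is a made-up house vocabulary term.
xhtml = """<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">Writing is easy; you just open a vein.</p>
  <p>Ordinary body text.</p>
</div>"""

NS = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(xhtml)

# Recover the semantic layer: every paragraph flagged as an epigraph.
epigraphs = [p.text for p in root.iter(f"{NS}p")
             if "epigraph" in (p.get("class") or "").split()]
print(epigraphs)  # → ['Writing is easy; you just open a vein.']
```

To a browser the tagged paragraph is still just a paragraph (styleable via CSS), while downstream tools can treat the class as real structure, which is precisely the 80:20 trade-off the argument rests on.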
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
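The anatomy just described can be reproduced with Python's `zipfile` module. The OPF metadata is stubbed out here, so this is an illustrative skeleton rather than a validator-clean ePub:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epub:
    # The mimetype entry comes first and is stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    # The container file points the reader at the package metadata.
    epub.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    # Package metadata stubbed out for brevity.
    epub.writestr("OEBPS/content.opf", "<!-- package metadata goes here -->")
    # And then the book's content: plain XHTML.
    epub.writestr("OEBPS/chapter1.xhtml",
                  "<html xmlns='http://www.w3.org/1999/xhtml'>"
                  "<body><p>A Web page in a wrapper.</p></body></html>")

with zipfile.ZipFile(buf) as epub:
    print(epub.namelist())
```

Renaming the result with a `.epub` extension is, structurally, all that separates this archive from an ebook: the content layer is ordinary XHTML throughout.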
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
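The search-and-replace step described above can also be done programmatically. The `list-item` class name is an assumed placeholder (the exact class InDesign's export emits varies); the sketch rewrites a run of class-tagged paragraphs into a proper XHTML list:

```python
import re

# InDesign's Digital Editions export tagged bulleted items as paragraphs
# with a class attribute; "list-item" is an assumed placeholder name.
src = ('<p class="list-item">First point</p>'
       '<p class="list-item">Second point</p>')

# Rewrite each tagged paragraph as a list item, then wrap the run in <ul>.
items = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', src)
result = f"<ul>{items}</ul>"
print(result)  # → <ul><li>First point</li><li>Second point</li></ul>
```

The same pattern (match a presentation-oriented construct, emit the structural element it stands for) covers most of the cleanup needed to reach software-independent XHTML.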
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
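The shape of that transformation can be illustrated without XSLT. The article's actual converter was an XSLT stylesheet; the Python sketch below performs the same mechanical mapping. The `ParagraphStyleRange`/`CharacterStyleRange`/`Content` nesting follows ICML's structure, but the style name and overall fragment are simplified assumptions, not a complete placeable .icml file:

```python
import xml.etree.ElementTree as ET

def xhtml_to_icml_fragment(xhtml):
    """Map XHTML paragraphs onto InCopy-style ParagraphStyleRange elements.

    A simplified sketch of the article's XSLT converter: the output is an
    ICML-like fragment, not a complete .icml file, and the applied style
    name is an assumed house-style placeholder.
    """
    NS = "{http://www.w3.org/1999/xhtml}"
    root = ET.fromstring(xhtml)
    story = ET.Element("Story")
    for p in root.iter(f"{NS}p"):
        pr = ET.SubElement(story, "ParagraphStyleRange",
                           AppliedParagraphStyle="ParagraphStyle/Body")
        cr = ET.SubElement(pr, "CharacterStyleRange")
        ET.SubElement(cr, "Content").text = p.text or ""
    return ET.tostring(story, encoding="unicode")

xhtml = ('<body xmlns="http://www.w3.org/1999/xhtml">'
         '<p>One paragraph.</p><p>Another.</p></body>')
print(xhtml_to_icml_fragment(xhtml))
```

Because the referenced style names resolve against the InDesign template at place time, the converter never needs to know anything about the final layout, which is what keeps the script so small.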
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, is can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is a browser-specific version of XML, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became OpenOffice when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Google ordered to remove links to stories about Google removing links to stories | Ars ... - 1 views

  •  
    "Google faces fines if it does not comply with ridiculous recursion. by Glyn Moody (UK) - Aug 21, 2015 8:34pm CEST"
Gonzalo San Gil, PhD.

MPAA Emails Expose Dirty Media Attack Against Google - TorrentFreak - 0 views

  •  
    "New emails have revealed how the MPAA and Mississippi Attorney General Jim Hood orchestrated a PR smear campaign against Google in order to push through their anti-piracy measures. "
Gonzalo San Gil, PhD.

How to manage your passwords from the Linux command line - 0 views

  •  
    "Posted by Administrator | Mar 18, 2015 | Linux | 0 comments The authentication with passwords has been quite wide spread these days. This safety measure might be quite good for the security matter, but, eventually, consumers appear in a big need of password management method - a tool, a program or a clever technique - in order to save the used passwords during all of the processes. "
Gonzalo San Gil, PhD.

The Linux Rain - Scripting a fancy chooser for recently used files - 0 views

  •  
    "By Bob Mesibov, published 17/08/2015 I recently scripted a GUI dialog that lists my 10 most recently modified files in reverse chronological order and allows me to choose more than 1 file for opening. The dialog is launched with a keyboard shortcut and is shown here with 2 files selected:"
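The core of the script described above, listing the most recently modified files newest-first, can be sketched without the GUI part. The dialog step itself is omitted here, and the function name is an invented one:

```python
import os
from pathlib import Path

def recent_files(directory, count=10):
    """Return up to `count` files in `directory`, newest first.

    Sketch of the file-listing step behind the article's chooser dialog;
    the original used a shell script, and the GUI selection is omitted.
    """
    files = [p for p in Path(directory).iterdir() if p.is_file()]
    # Sort by modification time, newest first (reverse chronological).
    files.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return [str(p) for p in files[:count]]

print(recent_files("."))
```

Piping a listing like this into a dialog tool that supports multi-selection, and binding the whole pipeline to a keyboard shortcut, reproduces the workflow the article demonstrates.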
Paul Merrell

Rapid - Press Releases - EUROPA - 0 views

  • The Commission found that Intel engaged in two specific forms of illegal practice. First, Intel gave wholly or partially hidden rebates to computer manufacturers on condition that they bought all, or almost all, their x86 CPUs from Intel. Intel also made direct payments to a major retailer on condition it stock only computers with Intel x86 CPUs. Such rebates and payments effectively prevented customers - and ultimately consumers - from choosing alternative products. Second, Intel made direct payments to computer manufacturers to halt or delay the launch of specific products containing competitors’ x86 CPUs and to limit the sales channels available to these products.
  • Intel awarded major computer manufacturers rebates on condition that they purchased all or almost all of their supplies, at least in certain defined segments, from Intel:
    - Intel gave rebates to computer manufacturer A from December 2002 to December 2005 conditional on this manufacturer purchasing exclusively Intel CPUs
    - Intel gave rebates to computer manufacturer B from November 2002 to May 2005 conditional on this manufacturer purchasing no less than 95% of its CPU needs for its business desktop computers from Intel (the remaining 5% that computer manufacturer B could purchase from rival chip maker AMD was then subject to further restrictive conditions set out below)
    - Intel gave rebates to computer manufacturer C from October 2002 to November 2005 conditional on this manufacturer purchasing no less than 80% of its CPU needs for its desktop and notebook computers from Intel
    - Intel gave rebates to computer manufacturer D in 2007 conditional on this manufacturer purchasing its CPU needs for its notebook computers exclusively from Intel.
  • Intel structured its pricing policy to ensure that a computer manufacturer which opted to buy AMD CPUs for that part of its needs that was open to competition would consequently lose the rebate (or a large part of it) that Intel provided for the much greater part of its needs for which the computer manufacturer had no choice but to buy from Intel. The computer manufacturer would therefore have to pay Intel a higher price for each of the units supplied for which the computer manufacturer had no alternative but to buy from Intel. In other words, should a computer manufacturer fail to purchase virtually all its x86 CPU requirements from Intel, it would forego the possibility of obtaining a significant rebate on any of its very high volumes of Intel purchases. Moreover, in order to be able to compete with the Intel rebates, for the part of the computer manufacturers' supplies that was up for grabs, a competitor that was just as efficient as Intel would have had to offer a price for its CPUs lower than its costs of producing those CPUs, even if the average price of its CPUs was lower than that of Intel.
  • In its decision, the Commission does not object to rebates in themselves but to the conditions Intel attached to those rebates.
  • Furthermore, Intel made payments to major retailer Media Saturn Holding from October 2002 to December 2007 on condition that it exclusively sold Intel-based PCs in all countries in which Media Saturn Holding is active.
  • For example, rival chip manufacturer AMD offered one million free CPUs to one particular computer manufacturer. If the computer manufacturer had accepted all of these, it would have lost Intel's rebate on its many millions of remaining CPU purchases, and would have been worse off overall simply for having accepted this highly competitive offer. In the end, the computer manufacturer took only 160,000 CPUs for free.
  • Intel also interfered directly in the relations between computer manufacturers and AMD. Intel awarded computer manufacturers payments - unrelated to any particular purchases from Intel - on condition that these computer manufacturers postponed or cancelled the launch of specific AMD-based products and/or put restrictions on the distribution of specific AMD-based products. The Commission found that these payments had the potential effect of preventing products for which there was a consumer demand from coming to the market. The Commission found the following specific cases:
    - For the 5% of computer manufacturer B’s business that was not subject to the conditional rebate outlined above, Intel made further payments to computer manufacturer B provided that this manufacturer: sold AMD-based business desktops only to small and medium enterprises; sold AMD-based business desktops only via direct distribution channels (as opposed to through distributors); and postponed the launch of its first AMD-based business desktop in Europe by 6 months.
    - Intel made payments to computer manufacturer E provided that this manufacturer postponed the launch of an AMD-based notebook from September 2003 to January 2004.
    - Before the conditional rebate to computer manufacturer D outlined above, Intel made payments to this manufacturer provided that it postponed the launch of AMD-based notebooks from September 2006 to the end of 2006.
  • The Commission obtained proof of the existence of many of the conditions found to be illegal in the antitrust decision even though they were not made explicit in Intel’s contracts. Such proof is based on a broad range of contemporaneous evidence such as e-mails obtained inter alia from unannounced on-site inspections, in responses to formal requests for information and in a number of formal statements made to the Commission by the other companies concerned. In addition, there is evidence that Intel had sought to conceal the conditions associated with its payments.
  •  
    This is an uncharacteristically strong press release from DG Competition. I still must read the order, but the description of the evidence is incredible, particularly the finding of concealment of its rebate conditions by Intel.
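The "as efficient competitor" squeeze described in the rebate annotations above is easy to put in numbers. All figures below are hypothetical (the press release discloses no actual prices or rebate levels); the point is only the mechanism:

```python
# Hypothetical, illustrative numbers only -- the Commission's decision does
# not disclose actual prices or rebate levels.
units_total = 1_000_000   # manufacturer's annual x86 CPU need
units_open = 50_000       # the 5% share a rival may contest (per the 95% condition)
intel_price = 100         # Intel list price per CPU, in dollars
rebate_per_unit = 10      # rebate on ALL units, conditional on the 95% share

# Cost if the manufacturer stays exclusive and keeps the rebate:
cost_exclusive = units_total * (intel_price - rebate_per_unit)

# If it buys the contestable 5% elsewhere, the rebate is lost on the 95%
# it still has no choice but to buy from Intel:
cost_mixed_intel_part = (units_total - units_open) * intel_price

# Per-unit price a rival must offer so the manufacturer is no worse off:
breakeven_price = (cost_exclusive - cost_mixed_intel_part) / units_open
print(breakeven_price)  # -100.0
```

A negative break-even price means an equally efficient rival could not win the contestable 5% at any positive price: to leave the manufacturer no worse off, it would effectively have to pay the manufacturer $100 per chip, which is the squeeze the Commission describes.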
Paul Merrell

F.B.I. Director to Call 'Dark' Devices a Hindrance to Crime Solving in a Policy Speech ... - 0 views

  • In his first major policy speech as director of the F.B.I., James B. Comey on Thursday plans to wade deeper into the debate between law enforcement agencies and technology companies about new programs intended to protect personal information on communication devices.Mr. Comey will say that encryption technologies used on these devices, like the new iPhone, have become so sophisticated that crimes will go unsolved because law enforcement officers will not be able to get information from them, according to a senior F.B.I. official who provided a preview of the speech.The speech was prompted, in part, by the new encryption technology on the iPhone 6, which was released last month. The phone encrypts emails, photos and contacts, thwarting intelligence and law enforcement agencies, like the National Security Agency and F.B.I., from gaining access to it, even if they have court approval.
  • The F.B.I. has long had concerns about devices “going dark” — when technology becomes so sophisticated that the authorities cannot gain access. But now, Mr. Comey said he believes that the new encryption technology has evolved to the point that it will adversely affect crime solving.He will say in the speech that these new programs will most severely affect state and local law enforcement agencies, because they are the ones who most often investigate crimes like kidnappings and robberies in which getting information from electronic devices in a timely manner is essential to solving the crime.
  • They also do not have the resources that are available to the F.B.I. and other federal intelligence and law enforcement authorities in order to get around the programs.Mr. Comey will cite examples of crimes that the authorities were able to solve because they gained access to a phone.“He is going to call for a discussion on this issue and ask whether this is the path we want to go down,” said the senior F.B.I. official. “He is not going to accuse the companies of designing the technologies to prevent the F.B.I. from accessing them. But, he will say that this is a negative byproduct and we need to work together to fix it.”
  • Mr. Comey is scheduled to give the speech — titled “Going Dark: Are Technology, Privacy and Public Safety on a Collision Course?” — at the Brookings Institution in Washington.
  • In the interview that aired on “60 Minutes” on Sunday, Mr. Comey said that “the notion that we would market devices that would allow someone to place themselves beyond the law troubles me a lot.”He said that it was the equivalent of selling cars with trunks that could never be opened, even with a court order.“The notion that people have devices, again, that with court orders, based on a showing of probable cause in a case involving kidnapping or child exploitation or terrorism, we could never open that phone?” he said. “My sense is that we've gone too far when we've gone there.”
  •  
    I'm informed that Comey will also call for legislation outlawing communication by whispering because of technical difficulties in law enforcement monitoring of such communications. 
Gonzalo San Gil, PhD.

In Memory Of The Liberties Lost In The War on Piracy | TorrentFreak - 0 views

  •  
    " Rick Falkvinge on February 2, 2015 C: 0 Opinion In order to prevent us from discussing and sharing interesting things, the copyright industry has successfully eliminated civil liberties online. But it was all down to a wrong and stupid business assumption in the first place."
Gonzalo San Gil, PhD.

How to Limit the Network Bandwidth Used by Applications in a Linux System with Trickle - 0 views

  •  
    Trickle is a network bandwidth shaper tool that allows us to manage the upload and download speeds of applications in order to prevent any single one of them from consuming all of the available bandwidth.
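For context, trickle is invoked from the shell, e.g. `trickle -d 200 -u 50 wget <url>` caps that wget run at roughly 200 KB/s down and 50 KB/s up. Internally, trickle shapes traffic by intercepting an application's socket calls via an LD_PRELOAD shim; the Python below is not that implementation, only a toy token-bucket limiter illustrating the general idea of capping an application's throughput:

```python
import time

class TokenBucket:
    """Toy rate limiter: permit at most `rate` bytes per second on average.
    Illustrative only -- trickle itself shapes traffic by delaying
    intercepted socket calls, not with this exact scheme."""

    def __init__(self, rate_bytes_per_sec, clock=time.monotonic):
        self.rate = rate_bytes_per_sec
        self.tokens = float(rate_bytes_per_sec)  # start with 1 second's budget
        self.last = clock()
        self.clock = clock

    def consume(self, nbytes):
        """Account for sending `nbytes` bytes; return the number of seconds
        the caller should sleep first to stay under the configured rate."""
        now = self.clock()
        # Refill tokens for elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= nbytes
        return max(0.0, -self.tokens / self.rate)
```

A sender would call `time.sleep(bucket.consume(len(chunk)))` before each write; bursts within the one-second budget pass immediately, and anything beyond it is delayed proportionally.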
Gonzalo San Gil, PhD.

Record Biz Wants To Tax Brits For Copying Their Own Music | TorrentFreak - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! And when will some demands be made of the recording industry in return, like lowering prices and improving both the quality of the works and the respect shown to the public…?
    • Gonzalo San Gil, PhD.
       
      # ! as if there weren't already enough taxes...
  •  
    [Several music industry organizations in the UK have launched an application for a judicial review after the government passed legislation allowing citizens to copy their own music for personal use. The group says that in order for the system to be fair, the public must pay a new tax. ...] # ! Definitely... # ! ... '#Music #watchmen' -th@se who persecute aficionad@s # ! just for #sharing- are 'watching' for everything BUT The Music... # ! Let's The #sharing #protect -effectively- the #Culture... (# ! perhaps, 'someone' thinks we don't pay enough taxes yet... # ! ...while Billions 'disappear' yearly from the public coffers....)
Gonzalo San Gil, PhD.

NSA - NoScript Anywhere - Next Generation Mobile NoScript for Android Smartphones and T... - 0 views

  •  
    "What's NoScript Mobile UI NoScript Anywhere (NSA) is the nickname for the next major iteration of the NoScript security add-on (NoScript 3.x), whose guts have been turned upside down in order to match Mozilla's Electrolysis multiprocessing architecture and implement a porting for Firefox Mobile, available on Android smartphones and tablets. " [# ! The Acronym... # ! ... it's just a coincidence... # ! ;) # ! The #App is #security for #Your (#opensource driven) #device...]
Paul Merrell

Tell Congress: My Phone Calls are My Business. Reform the NSA. | EFF Action Center - 3 views

  • The USA PATRIOT Act granted the government powerful new spying capabilities that have grown out of control—but the provision that the FBI and NSA have been using to collect the phone records of millions of innocent people expires on June 1. Tell Congress: it’s time to rethink out-of-control spying. A vote to reauthorize Section 215 is a vote against the Constitution.
  • On June 5, 2013, the Guardian published a secret court order showing that the NSA has interpreted Section 215 to mean that, with the help of the FBI, it can collect the private calling records of millions of innocent people. The government could even try to use Section 215 for bulk collection of financial records. The NSA’s defenders argue that invading our privacy is the only way to keep us safe. But the White House itself, along with the President’s Review Board has said that the government can accomplish its goals without bulk telephone records collection. And the Privacy and Civil Liberties Oversight Board said, “We have not identified a single instance involving a threat to the United States in which [bulk collection under Section 215 of the PATRIOT Act] made a concrete difference in the outcome of a counterterrorism investigation.” Since June of 2013, we’ve continued to learn more about how out of control the NSA is. But what has not happened since June is legislative reform of the NSA. There have been myriad bipartisan proposals in Congress—some authentic and some not—but lawmakers didn’t pass anything. We need comprehensive reform that addresses all the ways the NSA has overstepped its authority and provides the NSA with appropriate and constitutional tools to keep America safe. In the meantime, tell Congress to take a stand. A vote against reauthorization of Section 215 is a vote for the Constitution.
  •  
    EFF has launched an email campaign to press members of Congress not to renew section 215 of the Patriot Act when it expires on June 1, 2015.   Section 215 authorizes FBI officials to "make an application for an order requiring the production of *any tangible things* (including books, records, papers, documents, and other items) for an investigation to obtain foreign intelligence information not concerning a United States person or to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution." http://www.law.cornell.edu/uscode/text/50/1861 The section has been abused to obtain bulk collection of all telephone records for the NSA's storage and processing. But the section goes farther and lists as specific examples of records that can be obtained under section 215's authority, "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records."  Think of the NSA's voracious appetite for new "haystacks" it can store and search in its gigantic new data center in Utah. Then ask yourself, "do I want the NSA to obtain all of my personal data, store it, and search it at will?" If your answer is "no," you might consider visiting this page to send your Congress critters an email urging them to vote against renewal of section 215 and to vote for other NSA reforms listed in the EFF sample email text. Please do not procrastinate. Do it now, before you forget. Every voice counts. 
Paul Merrell

This Is the Real Reason Apple Is Fighting the FBI | TIME - 0 views

  • The first thing to understand about Apple’s latest fight with the FBI—over a court order to help unlock the deceased San Bernardino shooter’s phone—is that it has very little to do with the San Bernardino shooter’s phone. It’s not even, really, the latest round of the Crypto Wars—the long running debate about how law enforcement and intelligence agencies can adapt to the growing ubiquity of uncrackable encryption tools. Rather, it’s a fight over the future of high-tech surveillance, the trust infrastructure undergirding the global software ecosystem, and how far technology companies and software developers can be conscripted as unwilling suppliers of hacking tools for governments. It’s also the public face of a conflict that will undoubtedly be continued in secret—and is likely already well underway.
  • Considered in isolation, the request seems fairly benign: If it were merely a question of whether to unlock a single device—even one unlikely to contain much essential evidence—there would probably be little enough harm in complying. The reason Apple CEO Tim Cook has pledged to fight a court’s order to assist the bureau is that he understands the danger of the underlying legal precedent the FBI is seeking to establish. Four important pieces of context are necessary to see the trouble with the Apple order.
Paul Merrell

Upgrade Your iPhone Passcode to Defeat the FBI's Backdoor Strategy - 0 views

  • It’s true that ordering Apple to develop the backdoor will fundamentally undermine iPhone security, as Cook and other digital security advocates have argued. But it’s possible for individual iPhone users to protect themselves from government snooping by setting strong passcodes on their phones — passcodes the FBI would not be able to unlock even if it gets its iPhone backdoor. The technical details of how the iPhone encrypts data, and how the FBI might circumvent this protection, are complex and convoluted, and are being thoroughly explored elsewhere on the internet. What I’m going to focus on here is how ordinary iPhone users can protect themselves. The short version: If you’re worried about governments trying to access your phone, set your iPhone up with a random, 11-digit numeric passcode. What follows is an explanation of why that will protect you and how to actually do it.
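The arithmetic behind that recommendation is simple, assuming (as the article's analysis does) that the iPhone's Secure Enclave enforces roughly 80 ms of key-derivation work per passcode guess, a delay that holds even if the FBI's requested firmware removes the retry limits:

```python
# Back-of-the-envelope brute-force times, assuming ~80 ms per guess is
# enforced in hardware (the figure used in the article's analysis; treat it
# as an assumption, not a specification).
SECONDS_PER_GUESS = 0.08
SECONDS_PER_YEAR = 365 * 24 * 3600

def brute_force_seconds(digits, seconds_per_guess=SECONDS_PER_GUESS):
    """Worst-case time to try every passcode of `digits` random digits."""
    return 10 ** digits * seconds_per_guess

years_11 = brute_force_seconds(11) / SECONDS_PER_YEAR
hours_6 = brute_force_seconds(6) / 3600
print(f"11 random digits: ~{years_11:.0f} years worst case")  # ~254 years
print(f"6-digit PIN:      ~{hours_6:.0f} hours worst case")   # ~22 hours
```

Halve those figures for the average case; the point stands either way: a random 11-digit passcode pushes a hardware-rate-limited search past a century, while a typical 6-digit PIN falls in under a day.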
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

Microsoft Helping to Store Police Video From Taser Body Cameras | nsnbc international - 0 views

  • Microsoft has joined forces with Taser to combine the Azure cloud platform with law enforcement management tools.
  • Taser’s Axon body camera data management software on Evidence.com will run on Azure and Windows 10 devices to integrate evidence collection, analysis, and archival features as set forth by the Federal Bureau of Investigation Criminal Justice Information Services (CJIS) Security Policy. As per the partnership, Taser will utilize Azure’s machine learning and computing technologies to store police data on Microsoft’s government cloud. In addition, redaction capabilities of Taser will be improved which will assist police departments that are subject to bulk data requests. Currently, Taser is operating on Amazon Web Services; however this deal may entice police departments to upgrade their technology, which in turn would drive up sales of Windows 10. This partnership comes after Taser was given a lucrative deal with the Los Angeles Police Department (LAPD) last year, who ordered 7,000 body cameras equipped with 800 Axon body cameras for their officers in response to the recent deaths of several African Americans at the hands of police.
  • In order to ensure Taser maintains a monopoly on police body cameras, the corporation acquired contracts with police departments all across the nation for the purchase of body cameras through dubious ties to certain chiefs of police. The corporation announced in 2014 that “orders for body cameras [has] soared to $24.6 million from October to December” which represents a 5-fold increase in profits from 2013. Currently, Taser is in 13 cities with negotiations for new contracts being discussed in 28 more. Taser, according to records and interviews, allegedly has “financial ties to police chiefs whose departments have bought the recording devices.” In fact, Taser has been shown to provide airfare and luxury hotels for chiefs of police when traveling for speaking engagements in Australia and the United Arab Emirates (UAE); and hired them as consultants – among other perks and deals. Since 2013, Taser has been contractually bound with “consulting agreements with two such chiefs’ weeks after they retired” as well as is allegedly “in talks with a third who also backed the purchase of its products.”
Paul Merrell

Forget Apple vs. the FBI: WhatsApp Just Switched on Encryption for a Billion People | W... - 0 views

  • For most of the past six weeks, the biggest story out of Silicon Valley was Apple’s battle with the FBI over a federal order to unlock the iPhone of a mass shooter. The company’s refusal touched off a searing debate over privacy and security in the digital age. But this morning, at a small office in Mountain View, California, three guys made the scope of that enormous debate look kinda small. Mountain View is home to WhatsApp, an online messaging service now owned by tech giant Facebook, that has grown into one of the world’s most important applications. More than a billion people trade messages, make phone calls, send photos, and swap videos using the service. This means that only Facebook itself runs a larger self-contained communications network. And today, the enigmatic founders of WhatsApp, Brian Acton and Jan Koum, together with a high-minded coder and cryptographer who goes by the pseudonym Moxie Marlinspike, revealed that the company has added end-to-end encryption to every form of communication on its service.
  • This means that if any group of people uses the latest version of WhatsApp—whether that group spans two people or ten—the service will encrypt all messages, phone calls, photos, and videos moving among them. And that’s true on any phone that runs the app, from iPhones to Android phones to Windows phones to old school Nokia flip phones. With end-to-end encryption in place, not even WhatsApp’s employees can read the data that’s sent across its network. In other words, WhatsApp has no way of complying with a court order demanding access to the content of any message, phone call, photo, or video traveling through its service. Like Apple, WhatsApp is, in practice, stonewalling the federal government, but it’s doing so on a larger front—one that spans roughly a billion devices.
  • The FBI and the Justice Department declined to comment for this story. But many inside the government and out are sure to take issue with the company’s move. In late 2014, WhatsApp encrypted a portion of its network. In the months since, its service has apparently been used to facilitate criminal acts, including the terrorist attacks on Paris last year. According to The New York Times, as recently as this month, the Justice Department was considering a court case against the company after a wiretap order (still under seal) ran into WhatsApp’s end-to-end encryption. “The government doesn’t want to stop encryption,” says Joseph DeMarco, a former federal prosecutor who specializes in cybercrime and has represented various law enforcement agencies backing the Justice Department and the FBI in their battle with Apple. “But the question is: what do you do when a company creates an encryption system that makes it impossible for court-authorized search warrants to be executed? What is the reasonable level of assistance you should ask from that company?”
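The article’s core technical claim is that with end-to-end encryption, the private keys live only on users’ devices, so the provider relays ciphertext it cannot read and has nothing meaningful to surrender under a court order. A minimal, stdlib-only sketch of that idea follows. This is emphatically not the Signal Protocol WhatsApp actually deploys (no ratcheting, no authentication); the Mersenne prime, generator, and SHA-256 XOR keystream are toy illustrative choices and are not secure.

```python
# Toy end-to-end encryption sketch: Diffie-Hellman key agreement plus a
# hash-based XOR keystream. Illustrative only -- NOT the Signal Protocol,
# and NOT secure parameters.
import hashlib
import secrets

P = 2**127 - 1  # toy prime (Mersenne); real protocols use vetted groups/curves
G = 3           # toy generator

def keypair():
    # The private key never leaves the device; only the public half travels.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both sides independently derive G^(ab) mod P, then hash it to 32 bytes.
    s = pow(their_pub, my_priv, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_stream(key, data):
    # Toy stream cipher: SHA-256 keystream in counter mode (symmetric, so the
    # same call both encrypts and decrypts).
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

a_priv, a_pub = keypair()  # Alice's device
b_priv, b_pub = keypair()  # Bob's device

# Alice encrypts; the server relays only `ciphertext` and the public keys.
ciphertext = xor_stream(shared_key(a_priv, b_pub), b"meet at noon")

# The server holds neither private key, so it cannot decrypt -- there is
# nothing to hand over. Bob's device recovers the message locally.
plaintext = xor_stream(shared_key(b_priv, a_pub), ciphertext)
print(plaintext)  # b'meet at noon'
```

The design point the article turns on is visible in the last few lines: compliance with a content wiretap would require a private key, and the provider was never given one.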
Paul Merrell

Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments - 0 views

  • In September of last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country. The predictable results of those meetings are now clear and well-documented. Ever since, Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders.
  • Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government.
  • What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times: It’s not a law that appears to be written or designed to deal with the special situations where it’s lawful or appropriate to repress speech. … This sanctions law is being used to suppress speech with little consideration of the free expression values and the special risks of blocking speech, as opposed to blocking commerce or funds as the sanctions was designed to do. That’s really problematic.
  • As is always true of censorship, there is one, and only one, principle driving all of this: power. Facebook will submit to and obey the censorship demands of governments and officials who actually wield power over it, while ignoring those who do not. That’s why declared enemies of the U.S. and Israeli governments are vulnerable to censorship measures by Facebook, whereas U.S. and Israeli officials (and their most tyrannical and repressive allies) are not.
  • All of this illustrates that the same severe dangers from state censorship are raised at least as much by the pleas for Silicon Valley giants to more actively censor “bad speech.” Calls for state censorship may often be well-intentioned — a desire to protect marginalized groups from damaging “hate speech” — yet, predictably, they are far more often used against marginalized groups: to censor them rather than protect them. One need merely look at how hate speech laws are used in Europe, or on U.S. college campuses, to see that the censorship victims are often critics of European wars, or activists against Israeli occupation, or advocates for minority rights.
  • It’s hard to believe that anyone’s ideal view of the internet entails vesting power in the U.S. government, the Israeli government, and other world powers to decide who may be heard on it and who must be suppressed. But increasingly, in the name of pleading with internet companies to protect us, that’s exactly what is happening.