Future of the Web: Group items matching "recognize" in title, tags, annotations or URL

Paul Merrell

The Supreme Court's Groundbreaking Privacy Victory for the Digital Age | American Civil Liberties Union - 0 views

  • The Supreme Court on Friday handed down what is arguably the most consequential privacy decision of the digital age, ruling that police need a warrant before they can seize people’s sensitive location information stored by cellphone companies. The case specifically concerns the privacy of cellphone location data, but the ruling has broad implications for government access to all manner of information collected about people and stored by the purveyors of popular technologies. In its decision, the court rejects the government’s expansive argument that people lose their privacy rights merely by using those technologies. Carpenter v. U.S., which was argued by the ACLU, involves Timothy Carpenter, who was convicted in 2013 of a string of burglaries in Detroit. To tie Carpenter to the burglaries, FBI agents obtained — without seeking a warrant — months’ worth of his location information from Carpenter’s cellphone company. They got almost 13,000 data points tracking Carpenter’s whereabouts during that period, revealing where he slept, when he attended church, and much more. Indeed, as Chief Justice John Roberts wrote in Friday’s decision, “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”
  • The ACLU argued the agents had violated Carpenter’s Fourth Amendment rights when they obtained such detailed records without a warrant based on probable cause. In a decision written by Chief Justice John Roberts, the Supreme Court agreed, recognizing that the Fourth Amendment must apply to records of such unprecedented breadth and sensitivity: Mapping a cell phone’s location over the course of 127 days provides an all-encompassing record of the holder’s whereabouts. As with GPS information, the timestamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’
  • The government’s argument that it needed no warrant for these records extends far beyond cellphone location information, to any data generated by modern technologies and held by private companies rather than in our own homes or pockets. To make their case, government lawyers relied on an outdated, 1970s-era legal doctrine that says that once someone shares information with a “third party” — in Carpenter’s case, a cellphone company — that data is no longer protected by the Fourth Amendment. The Supreme Court made abundantly clear that this doctrine has its limits and cannot serve as a carte blanche for the government seizure of any data of its choosing without judicial oversight.
  • While the decision extends in the immediate term only to historical cellphone location data, the Supreme Court’s reasoning opens the door to the protection of the many other kinds of data generated by popular technologies. Today’s decision provides a groundbreaking update to privacy rights that the digital age has rendered vulnerable to abuse by the government’s appetite for surveillance. It recognizes that “cell phones and the services they provide are ‘such a pervasive and insistent part of daily life’ that carrying one is indispensable to participation in modern society.” And it helps ensure that we don’t have to give up those rights if we want to participate in modern life. 
Paul Merrell

German Parliament Says No More Software Patents | Electronic Frontier Foundation - 0 views

  • The German Parliament recently took a huge step that would eliminate software patents (PDF) when it issued a joint motion requiring the German government to ensure that computer programs are only covered by copyright. Put differently, in Germany, software cannot be patented. The Parliament's motion follows a similar announcement made by New Zealand's government last month (PDF), in which it determined that computer programs were not inventions or a manner of manufacture and, thus, cannot be patented.
  • The crux of the German Parliament's motion rests on the fact that software is already protected by copyright, and developers are afforded "exploitation rights." These rights, however, become confused when broad, abstract patents also cover general aspects of computer programs. These two intellectual property systems are at odds. The clearest example of this clash is with free software. The motion recognizes this issue and therefore calls upon the government "to preserve the precedence of copyright law so that software developers can also publish their work under open source license terms and conditions with legal security." The free software movement relies upon the fact that software can be released under a copyright license that allows users to share it and build upon others' works. Patents, as Parliament finds, inhibit this fundamental spread.
  • Just like in the New Zealand order, the German Parliament carved out one type of software that could be patented, when: "the computer program serves merely as a replaceable equivalent for a mechanical or electro-mechanical component, as is the case, for instance, when software-based washing machine controls can replace an electromechanical program control unit consisting of revolving cylinders which activate the control circuits for the specific steps of the wash cycle." This allows for software that is tied to (and controls part of) another invention to be patented. In other words, if a claimed process is purely a computer program, then it is not patentable. (New Zealand's order uses a similar washing machine example.) The motion ends by calling upon the German government to push for this approach to be standard across all of Europe. We hope policymakers in the United States will also consider fundamental reform that deals with the problems caused by low-quality software patents. Ultimately, any real reform must address this issue.
  •  
    Note that an unofficial translation of the parliamentary motion is linked from the article. This adds substantially to the pressure internationally to end software patents because Germany has been the strongest defender of software patents in Europe. The same legal grounds would not apply in the U.S. The strongest argument for non-patentability in the U.S., in my opinion, is that software patents embody both prior art and obviousness. A general purpose computer can accomplish nothing unforeseen by the prior art of the computing device. And it is impossible for software to do more than cause different sequences of bit register states to be executed. This is the province of "skilled artisans" using known methods to produce predictable results. There is a long line of Supreme Court decisions holding that an "invention" with such traits is non-patentable. I have summarized that argument with citations at .
Gonzalo San Gil, PhD.

Shazam Music Search Alternative For Linux - Freedom Penguin - 0 views

  •  
    "True to its name, instantmusic provides you with the ability to determine the name of a song/artist simply by providing some clues about the song. "
Paul Merrell

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data Collection | Electronic Frontier Foundation - 0 views

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—which included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number. 
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search? Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas. Feinstein: Regardless of the minimization that was properly carried out. Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches. It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the* persons or *things to be seized."* So much for the particularized description of the place to be searched and the things to be seized. Fah! Who needs a Constitution, anyway ....
Yong Zhang

Tech Talk: The Technology Behind Google Earning $1,000 Per Second (Part 2) - Sina Technology (新浪网) - 0 views

    • Yong Zhang
       Eastern cultures attend to holistic relationships and context (东方文化对全局关系、背景的关注). East = Relationships; West = Individualistic. If you show people from the West a picture, they focus on a main or dominant foreground object, while people from East Asia pay more attention to context and background. East Asian people who grow up in the West show the Western pattern. "When shown complex, busy scenes, Asian-Americans and non-Asian-Americans recruited different brain regions. The Asians showed more activity in areas that process figure-ground relations (holistic context), while the Americans showed more activity in regions that recognize objects." How we see it: Culturally different eye movement patterns over visual scenes, by Julie E. Boland, Hannah Faye Chua, & Richard E. Nisbett; Sharon Begley: West Brain, East Brain
Paul Merrell

Home - schema.org - 0 views

  • What is Schema.org? This site provides a collection of schemas, i.e., html tags, that webmasters can use to markup their pages in ways recognized by major search providers. Search engines including Bing, Google, Yahoo! and Yandex rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, search engines have come together to provide a shared collection of schemas that webmasters can use.
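    A concrete sketch of the kind of markup described above, using schema.org's Microdata syntax (the itemtype and itemprop names are real schema.org terms; the page content is invented for illustration):

      <div itemscope itemtype="https://schema.org/Movie">
        <h1 itemprop="name">Avatar</h1>
        <div itemprop="director" itemscope itemtype="https://schema.org/Person">
          Director: <span itemprop="name">James Cameron</span>
        </div>
        <span itemprop="genre">Science fiction</span>
      </div>

    A crawler that understands schema.org reads the itemscope/itemprop attributes and can surface the film's name, director, and genre as structured data in search results, while ordinary browsers simply render the text.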
Gonzalo San Gil, PhD.

New Download Ban Won't Work, Politicians Say | TorrentFreak - 1 views

  •  
    " Andy on April 25, 2014 C: 130 Breaking A Dutch ban on the downloading of copyrighted material from unauthorized sources was cheered by the entertainment industries recently, but will fall short of achieving its aims. That's the opinion of several politicians who believe that only by providing better legal options will the situation improve. As they call for debate, a government spokesperson predicted that the ban will make it easier to chase down 'pirate' sites."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
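    A minimal sketch of the kind of class-based extension the author has in mind (these class names are invented for illustration, not taken from any published microformat):

      <div class="chapter">
        <p class="epigraph">Every reader writes the book anew.</p>
        <p class="summary">This chapter surveys the production workflow.</p>
        <p>Ordinary body paragraphs need no class at all.</p>
      </div>

    A CSS stylesheet or an XSLT transform can then treat p.epigraph and p.summary differently from plain paragraphs, recovering some of the semantic distinctions that DocBook or DITA would express with dedicated element names.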
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
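    To make that anatomy concrete, here is a sketch of what a typical ePub 2 package contains once unzipped (the OEBPS directory and file names are common packaging conventions; only mimetype and META-INF/container.xml are fixed by the specification):

      mimetype                   contains the literal string application/epub+zip
      META-INF/container.xml     points to the package document
      OEBPS/content.opf          metadata, manifest, and reading order (spine)
      OEBPS/toc.ncx              table of contents
      OEBPS/chapter01.xhtml      the book content itself, as XHTML

      <!-- META-INF/container.xml -->
      <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>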
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
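    As a point of comparison, unzipping an IDML file reveals a layout along these lines (a hedged sketch based on Adobe's published IDML documentation; exact contents vary with the document):

      mimetype                     application/vnd.adobe.indesign-idml-package
      designmap.xml                the top-level map tying the pieces together
      MasterSpreads/ and Spreads/  page geometry and layout
      Stories/Story_*.xml          the actual text content
      Resources/Styles.xml         paragraph and character style definitions
      Resources/Fonts.xml, Graphic.xml, Preferences.xml

    The Stories/ files and Resources/Styles.xml are the parts a transformation workflow mostly cares about.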
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
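    A before-and-after sketch of the kind of cleanup described above (the class name stands in for whatever InDesign's export actually produced):

      <!-- presentation-oriented export -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- cleaned, structural XHTML -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>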
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
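    Invoked from the command line, that step looks something like the following (the file names are hypothetical; xsltproc takes the stylesheet first and the input document second, writing the result to standard output):

      xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml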
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
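    The authors' script itself is not reproduced in the article; the fragment below is only a rough sketch of the general shape such a transform might take. The ICML element names (ParagraphStyleRange, CharacterStyleRange, Content, Br) come from Adobe's IDML/ICML documentation, but the mapping and the style name are invented, and a real ICML file also needs the surrounding Document element and style definitions that this sketch omits:

      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <!-- walk the XHTML source and emit one ICML paragraph range per <p> -->
        <xsl:template match="/">
          <xsl:apply-templates select="//xhtml:p"/>
        </xsl:template>
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="normalize-space(.)"/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>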
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is the browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Facebook's Deepface Software Has Gotten Them in Deep Trouble | nsnbc international - 0 views

  • In a Chicago court, several Facebook users filed a class-action lawsuit against the social media giant for allegedly violating its users’ privacy rights to acquire the largest privately held stash of biometric face-recognition data in the world. The court documents reveal claims that “Facebook began violating the Illinois Biometric Information Privacy Act (IBIPA) of 2008 in 2010, in a purported attempt to make the process of tagging friends easier.”
  • This was accomplished through the “tag suggestions” feature provided by Facebook which “scans all pictures uploaded by users and identifies any Facebook friends they may want to tag.” The Facebook users maintain that this feature is a “form of data mining [that] violates user’s privacy”. One plaintiff said this is a “brazen disregard for its users’ privacy rights,” through which Facebook has “secretly amassed the world’s largest privately held database of consumer biometrics data.” Because “Facebook actively conceals” their protocol using “faceprint databases” to identify Facebook users in photos, and “doesn’t disclose its wholesale biometrics data collection practices in its privacy policies, nor does it even ask users to acknowledge them.”
  • This would be a violation of the IBIPA, which states it is “unlawful to collect biometric data without written notice to the subject stating the purpose and length of the data collection, and without obtaining the subject’s written release.” Because all users are automatically part of the “faceprint” facial recognition program, this is an illegal act in the state of Illinois, according to the complaint. Jay Edelson, attorney for the plaintiffs, asserts the opt-out ability to prevent other Facebook users from tagging them in photos is “insufficient”.
  • Deepface is the name of the new technology researchers at Facebook created in order to identify people in pictures; mimicking the way humans recognize the differences in each other’s faces. Facebook has already implemented facial recognition software (FRS) to suggest names for tagging photos; however Deepface can “identify faces from a side view” as well as when the person is directly facing the camera in the picture. In 2013, Erin Egan, chief privacy officer for Facebook, said that this upgrade “would give users better control over their personal information, by making it easier to identify posted photos in which they appear.” Egan explained: “Our goal is to facilitate tagging so that people know when there are photos of them on our service.” Facebook has stated that they retain information from their users that is syphoned from all across the web. This data is used to increase Facebook’s profits with the information being sold for marketing purposes. This is the impressive feature of Deepface; as previous FRS can only decipher faces in images that are frontal views of people. Shockingly, Deepface displays 97.25% accuracy in identifying faces in photos. That is quite a feat considering humans have a 97.53% accuracy rate. In order to ensure accuracy, Deepface “conducts its analysis based on more than 120 million different parameters.”
Paul Merrell

IDABC - Revision of the EIF and AG - 0 views

  • In 2006, the European Commission started the revision of the European Interoperability Framework (EIF) and the Architecture Guidelines (AG).
  • The European Commission has started drafting the EIF v2.0 in close cooperation with the concerned Commission services and with the Members States as well as with the Candidate Countries and EEA Countries as observers.
  • A draft document, from which the final EIF v2.0 will be elaborated, was available for external comments until 22 September. The proposal for the new EIF v2.0 that was subject to consultation is available for download (3,508 KB).
  •  
    This planning document forms the basis for the forthcoming work to develop European Interoperability Framework v. 2.0. It is the overview of things to come, so to speak. Well worth the read to see how SOA concepts are evolving at the bleeding edge. But also noteworthy for the faceted expansion in the definition of "interoperability," which now includes: [i] political context; [ii] legal interop; [iii] organizational interop; [iv] semantic interop; and [v] technical interop. A lot of people talk the interop talk; this is a document from people who are walking the interop walk, striving to bring order out of the chaos of incompatible ICT systems across the E.U.
  •  
    Full disclosure: I submitted detailed comments on the draft of the subject document on behalf of the Universal Interoperability Council. One theme of my comments was embraced in this document: the document recognizes human-machine interactions as a facet of interoperability, moving accessibility and usability from sideshow treatment in the draft to part of the technical interop dimension of the plan.
Paul Merrell

Open letter to Google: free VP8, and use it on YouTube - Free Software Foundation - 0 views

  •  
    With your purchase of On2, you now own both the world's largest video site (YouTube) and all the patents behind a new high performance video codec -- VP8. Just think what you can achieve by releasing the VP8 codec under an irrevocable royalty-free license and pushing it out to users on YouTube? You can end the web's dependence on patent-encumbered video formats and proprietary software (Flash). This ability to offer a free format on YouTube, however, is only a tiny fraction of your real leverage. The real party starts when you begin to encourage users' browsers to support free formats. There are lots of ways to do this. Our favorite would be for YouTube to switch from Flash to free formats and HTML, offering users with obsolete browsers a plugin or a new browser (free software, of course). Apple has had the mettle to ditch Flash on the iPhone and the iPad -- albeit for suspect reasons and using abhorrent methods (DRM) -- and this has pushed web developers to make Flash-free alternatives of their pages. You could do the same with YouTube, for better reasons, and it would be a death-blow to Flash's dominance in web video. If you care about free software and the free web (a movement and medium to which you owe your success) you must take bold action to replace Flash with free standards and free formats. Patented video codecs have already done untold harm to the web and its users, and this will continue until we stop it. Because patent-encumbered formats were costly to incorporate into browsers, a bloated, ill-suited piece of proprietary software (Flash) became the de facto standard for online video. Until we move to free formats, the threat of patent lawsuits and licensing fees hangs over every software developer, video creator, hardware maker, web site and corporation -- including you. You can use your purchase of On2 merely as a bargaining chip to achieve your own private solution to the problem, but that's both a cop-out and a strategic mistake. Without making VP
Paul Merrell

WG Review: Internet Wideband Audio Codec (codec) - 0 views

  • According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions: 1. Are optimized for use in interactive Internet applications. 2. Are published by a recognized standards development organization (SDO) and therefore subject to clear change control. 3. Can be widely implemented and easily distributed among application developers, service operators, and end users. There exist codecs that provide high quality encoding of audio information, but that are not optimized for the actual conditions of the Internet; according to reports, this mismatch between design and deployment has hindered adoption of such codecs in interactive Internet applications.
  • The goal of this working group is to develop a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users. Core technical considerations include, but are not necessarily limited to, the following: 1. Designing for use in interactive applications (examples include, but are not limited to, point-to-point voice calls, multi-party voice conferencing, telepresence, teleoperation, in-game voice chat, and live music performance) 2. Addressing the real transport conditions of the Internet as identified and prioritized by the working group 3. Ensuring interoperability with the Real-time Transport Protocol (RTP), including secure transport via SRTP 4. Ensuring interoperability with Internet signaling technologies such as Session Initiation Protocol (SIP), Session Description Protocol (SDP), and Extensible Messaging and Presence Protocol (XMPP); however, the result should not depend on the details of any particular signaling technology
Paul Merrell

Goggles turns Android into pocket translator - Google 24/7 - Fortune Tech - 2 views

  • Google's Goggles mobile application has always been a fun tool. The idea is that if you snap a picture and upload it to Google (along with your location and time), Google can present more about that object and, by extension, your surroundings. It isn't always terribly accurate in identifying what is in the picture, but the results are sometimes helpful, if not amusing. Today, Goggles got a very specific feature that will help travelers and readers of foreign-language texts immensely. Now you can point your Android camera at a sign, book, or any sort of foreign word, snap a picture, and get a translation. Google uses optical character recognition, or OCR, to turn the image into words, and then uses its translation services to turn those words into a language you recognize.
Gonzalo San Gil, PhD.

Who Gives the Most Trusted Recommendations? - eMarketer - 1 views

  •  
    [ FEBRUARY 18, 2011 "People like me" vs. the experts Social media has put power in the hands of the consumer, giving everyone a publishing platform to push out their thoughts and feelings to the world at large. This has given great power to word-of-mouth, typically considered the most trustworthy form of marketing. But social behavior is changing as it matures. ]
  •  
    *We Do. 'They' Recognize It. Let's Realize The Power we Have and Use It. :)
Paul Merrell

UN Report Finds Mass Surveillance Violates International Treaties and Privacy Rights - The Intercept - 0 views

  • The United Nations’ top official for counter-terrorism and human rights (known as the “Special Rapporteur”) issued a formal report to the U.N. General Assembly today that condemns mass electronic surveillance as a clear violation of core privacy rights guaranteed by multiple treaties and conventions. “The hard truth is that the use of mass surveillance technology effectively does away with the right to privacy of communications on the Internet altogether,” the report concluded. Central to the Rapporteur’s findings is the distinction between “targeted surveillance” — which “depend[s] upon the existence of prior suspicion of the targeted individual or organization” — and “mass surveillance,” whereby “states with high levels of Internet penetration can [] gain access to the telephone and e-mail content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites.” In a system of “mass surveillance,” the report explained, “all of this is possible without any prior suspicion related to a specific individual or organization. The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned.”
  • Mass surveillance thus “amounts to a systematic interference with the right to respect for the privacy of communications,” it declared. As a result, “it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately.” In concluding that mass surveillance impinges core privacy rights, the report was primarily focused on the International Covenant on Civil and Political Rights, a treaty enacted by the General Assembly in 1966, to which all of the members of the “Five Eyes” alliance are signatories. The U.S. ratified the treaty in 1992, albeit with various reservations that allowed for the continuation of the death penalty and which rendered its domestic law supreme. With the exception of the U.S.’s Persian Gulf allies (Saudi Arabia, UAE and Qatar), virtually every major country has signed the treaty. Article 17 of the Covenant guarantees the right of privacy, the defining protection of which, the report explained, is “that individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”
  • The report’s key conclusion is that this core right is impinged by mass surveillance programs: “Bulk access technology is indiscriminately corrosive of online privacy and impinges on the very essence of the right guaranteed by article 17. In the absence of a formal derogation from States’ obligations under the Covenant, these programs pose a direct and ongoing challenge to an established norm of international law.” The report recognized that protecting citizens from terrorism attacks is a vital duty of every state, and that the right of privacy is not absolute, as it can be compromised when doing so is “necessary” to serve “compelling” purposes. It noted: “There may be a compelling counter-terrorism justification for the radical re-evaluation of Internet privacy rights that these practices necessitate. ” But the report was adamant that no such justifications have ever been demonstrated by any member state using mass surveillance: “The States engaging in mass surveillance have so far failed to provide a detailed and evidence-based public justification for its necessity, and almost no States have enacted explicit domestic legislation to authorize its use.”
  • Instead, explained the Rapporteur, states have relied on vague claims whose validity cannot be assessed because of the secrecy behind which these programs are hidden: “The arguments in favor of a complete abrogation of the right to privacy on the Internet have not been made publicly by the States concerned or subjected to informed scrutiny and debate.” About the ongoing secrecy surrounding the programs, the report explained that “states deploying this technology retain a monopoly of information about its impact,” which is “a form of conceptual censorship … that precludes informed debate.” A June report from the High Commissioner for Human Rights similarly noted “the disturbing lack of governmental transparency associated with surveillance policies, laws and practices, which hinders any effort to assess their coherence with international human rights law and to ensure accountability.” The rejection of the “terrorism” justification for mass surveillance as devoid of evidence echoes virtually every other formal investigation into these programs. A federal judge last December found that the U.S. Government was unable to “cite a single case in which analysis of the NSA’s bulk metadata collection actually stopped an imminent terrorist attack.” Later that month, President Obama’s own Review Group on Intelligence and Communications Technologies concluded that mass surveillance “was not essential to preventing attacks” and information used to detect plots “could readily have been obtained in a timely manner using conventional [court] orders.”
  • That principle — that the right of internet privacy belongs to all individuals, not just Americans — was invoked by NSA whistleblower Edward Snowden when he explained in a June 2013 interview with The Guardian why he disclosed documents showing global surveillance rather than just the surveillance of Americans: “More fundamentally, the ‘US Persons’ protection in general is a distraction from the power and danger of this system. Suspicionless surveillance does not become okay simply because it’s only victimizing 95% of the world instead of 100%.” The U.N. Rapporteur was clear that these systematic privacy violations are the result of a union between governments and tech corporations: “States increasingly rely on the private sector to facilitate digital surveillance. This is not confined to the enactment of mandatory data retention legislation. Corporates [sic] have also been directly complicit in operationalizing bulk access technology through the design of communications infrastructure that facilitates mass surveillance.”
  • The report was most scathing in its rejection of a key argument often made by American defenders of the NSA: that mass surveillance is justified because Americans are given special protections (the requirement of a FISA court order for targeted surveillance) which non-Americans (95% of the world) do not enjoy. Not only does this scheme fail to render mass surveillance legal, but it itself constitutes a separate violation of international treaties (emphasis added): The Special Rapporteur concurs with the High Commissioner for Human Rights that where States penetrate infrastructure located outside their territorial jurisdiction, they remain bound by their obligations under the Covenant. Moreover, article 26 of the Covenant prohibits discrimination on grounds of, inter alia, nationality and citizenship. The Special Rapporteur thus considers that States are legally obliged to afford the same privacy protection for nationals and non-nationals and for those within and outside their jurisdiction. Asymmetrical privacy protection regimes are a clear violation of the requirements of the Covenant.
  • Three Democratic Senators on the Senate Intelligence Committee wrote in The New York Times that “the usefulness of the bulk collection program has been greatly exaggerated” and “we have yet to see any proof that it provides real, unique value in protecting national security.” A study by the centrist New America Foundation found that mass metadata collection “has had no discernible impact on preventing acts of terrorism” and, where plots were disrupted, “traditional law enforcement and investigative methods provided the tip or evidence to initiate the case.” It labeled the NSA’s claims to the contrary as “overblown and even misleading.” While mass surveillance is worthless as a counter-terrorism tool, the UN report warned that allowing it to persist with no transparency creates “an ever present danger of ‘purpose creep,’ by which measures justified on counter-terrorism grounds are made available for use by public authorities for much less weighty public interest purposes.” Citing the UK as one example, the report warned that, already, “a wide range of public bodies have access to communications data, for a wide variety of purposes, often without judicial authorization or meaningful independent oversight.”
  • The latest finding adds to the growing number of international formal rulings that the mass surveillance programs of the U.S. and its partners are illegal. In January, the European parliament’s civil liberties committee condemned such programs in “the strongest possible terms.” In April, the European Court of Justice ruled that European legislation on data retention contravened EU privacy rights. A top secret memo from the GCHQ, published last year by The Guardian, explicitly stated that one key reason for concealing these programs was fear of a “damaging public debate” and specifically “legal challenges against the current regime.” The report ended with a call for far greater transparency along with new protections for privacy in the digital age. Continuation of the status quo, it warned, imposes “a risk that systematic interference with the security of digital communications will continue to proliferate without any serious consideration being given to the implications of the wholesale abandonment of the right to online privacy.” The urgency of these reforms is underscored, explained the Rapporteur, by a conclusion of the United States Privacy and Civil Liberties Oversight Board that “permitting the government to routinely collect the calling records of the entire nation fundamentally shifts the balance of power between the state and its citizens.”
Gonzalo San Gil, PhD.

Pro-Privacy Senator Wyden on Fighting the NSA From Inside the System | WIRED - 1 views

  •  
    "Senator Ron Wyden thought he knew what was going on. The Democrat from Oregon, who has served on the Senate Select Committee on Intelligence since 2001, thought he knew the nature of the National Security Agency's surveillance activities. As a committee member with a classified clearance, he received regular briefings to conduct oversight."
  •  
    I'm a retired lawyer in Oregon and a devout civil libertarian. Wyden is one of my senators. I have been closely following this government digital surveillance stuff since the original articles in 1988 that first broke the story on the Five Eyes' Echelon surveillance system. E.g., http://goo.gl/mCxs6Y While I will grant that Wyden has bucked the system gently (he's far more a drag anchor than a propeller), he has shown no political courage on the NSA stuff whatsoever. In the linked article, he admits keeping his job as a Senator was more important to him than doing anything *effective* to stop the surveillance in its tracks. His "working from the inside" line notwithstanding, he allowed creation of a truly Orwellian state to develop without more than a few ineffective yelps that were never listened to because he lacked the courage to take a stand and bring down the house that NSA built with documentary evidence. It took a series of whistleblowers culminating in Edward Snowden's courageous willingness to spend the rest of his life in prison to bring the public to its currently educated state. Wyden on the other hand, didn't even have the courage to lay it all out in the public Congressional record when he could have done so at any time without risking more than his political career because of the Constitution's Speech and Debate Clause that absolutely protects Wyden from criminal prosecution had he done so. I don't buy arguments that fear of NSA blackmail can excuse politicians from doing their duty. That did not stop the Supreme Court from unanimously laying down an opinion, in Riley v. California, that brings to an end the line of case decisions based on Smith v. Maryland that is the underpinning of the NSA/DoJ position on access to phone metadata without a warrant. http://scholar.google.com/scholar_case?case=9647156672357738355 Elected and appointed government officials owe a duty to the citizens of this land to protect and defend the Constitution that legallh
Paul Merrell

What's Scarier: Terrorism, or Governments Blocking Websites in its Name? - The Intercept - 0 views

  • Forcibly taking down websites deemed to be supportive of terrorism, or criminalizing speech deemed to “advocate” terrorism, is a major trend in both Europe and the West generally. Last month in Brussels, the European Union’s counter-terrorism coordinator issued a memo proclaiming that “Europe is facing an unprecedented, diverse and serious terrorist threat,” and argued that increased state control over the Internet is crucial to combating it. The memo noted that “the EU and its Member States have developed several initiatives related to countering radicalisation and terrorism on the Internet,” yet argued that more must be done. It argued that the focus should be on “working with the main players in the Internet industry [a]s the best way to limit the circulation of terrorist material online.” It specifically hailed the tactics of the U.K. Counter-Terrorism Internet Referral Unit (CTIRU), which has succeeded in causing the removal of large amounts of material it deems “extremist”:
  • In addition to recommending the dissemination of “counter-narratives” by governments, the memo also urged EU member states to “examine the legal and technical possibilities to remove illegal content.” Exploiting terrorism fears to control speech has been a common practice in the West since 9/11, but it is becoming increasingly popular even in countries that have experienced exceedingly few attacks. A new extremist bill advocated by the right-wing Harper government in Canada (also supported by Liberal Party leader Justin Trudeau even as he recognizes its dangers) would create new crimes for “advocating terrorism”; specifically, “every person who, by communicating statements, knowingly advocates or promotes the commission of terrorism offences in general” would be guilty of a crime and could be sent to prison for five years for each offense. In justifying the new proposal, the Canadian government admits that “under the current criminal law, it is [already] a crime to counsel or actively encourage others to commit a specific terrorism offence.” This new proposal is about criminalizing ideas and opinions. In the government’s words, it “prohibits the intentional advocacy or promotion of terrorism, knowing or reckless as to whether it would result in terrorism.”
  • If someone argues that continuous Western violence and interference in the Muslim world for decades justifies violence being returned to the West, or even advocates that governments arm various insurgents considered by some to be “terrorists,” such speech could easily be viewed as constituting a crime. To calm concerns, Canadian authorities point out that “the proposed new offence is similar to one recently enacted by Australia, that prohibits advocating a terrorist act or the commission of a terrorism offence-all while being reckless as to whether another person will engage in this kind of activity.” Indeed, Australia enacted a new law late last year that indisputably targets political speech and ideas, as well as criminalizing journalism considered threatening by the government. Punishing people for their speech deemed extremist or dangerous has been a vibrant practice in both the U.K. and U.S. for some time now, as I detailed (coincidentally) just a couple days before free speech marches broke out in the West after the Charlie Hebdo attacks. Those criminalization-of-speech attacks overwhelmingly target Muslims, and have resulted in the punishment of such classic free speech activities as posting anti-war commentary on Facebook, tweeting links to “extremist” videos, translating and posting “radicalizing” videos to the Internet, writing scholarly articles in defense of Palestinian groups and expressing harsh criticism of Israel, and even including a Hezbollah channel in a cable package.
  • Beyond the technical issues, trying to legislate ideas out of existence is a fool’s game: those sufficiently determined will always find ways to make themselves heard. Indeed, as U.S. pop star Barbra Streisand famously learned, attempts to suppress ideas usually result in the greatest publicity possible for their advocates and/or elevate them by turning fringe ideas into martyrs for free speech (I have zero doubt that all five of the targeted sites enjoyed among their highest traffic dates ever today as a result of the French targeting). But the comical futility of these efforts is exceeded by their profound dangers. Who wants governments to be able to unilaterally block websites? Isn’t the exercise of this website-blocking power what has long been cited as reasons we should regard the Bad Countries — such as China and Iran — as tyrannies (which also usually cite “counterterrorism” to justify their censorship efforts)?
  • As those and countless other examples prove, the concepts of “extremism” and “radicalizing” (like “terrorism” itself) are incredibly vague and elastic, and in the hands of those who wield power, almost always expand far beyond what you think they should mean (plotting to blow up innocent people) to mean: anyone who disseminates ideas that are threatening to the exercise of our power. That’s why powers justified in the name of combating “radicalism” or “extremism” are invariably — not often or usually, but invariably — applied to activists, dissidents, protesters and those who challenge prevailing orthodoxies and power centers. My arguments for distrusting governments to exercise powers of censorship are set forth here (in the context of a prior attempt by a different French minister to control the content of Twitter). In sum, far more damage has been inflicted historically by efforts to censor and criminalize political ideas than by the kind of “terrorism” these governments are invoking to justify these censorship powers. And whatever else may be true, few things are more inimical to, or threatening of, Internet freedom than allowing functionaries inside governments to unilaterally block websites from functioning on the ground that the ideas those sites advocate are objectionable or “dangerous.” That’s every bit as true when the censors are in Paris, London, Ottawa, and Washington as when they are in Tehran, Moscow, or Beijing.
Paul Merrell

Official Google Blog: A first step toward more global email - 0 views

  • Whether your email address is firstname.lastname@ or something more expressive like corgicrazy@, an email address says something about who you are. But from the start, email addresses have always required you to use non-accented Latin characters when signing up. Less than half of the world’s population has a mother tongue that uses the Latin alphabet. And even fewer people use only the letters A-Z. So if your name (or that of your favorite pet) contains accented characters (like “José Ramón”) or is written in another script like Chinese or Devanagari, your email address options are limited. But all that could change. In 2012, an organization called the Internet Engineering Task Force (IETF) created a new email standard that supports addresses with non-Latin and accented Latin characters (e.g. 武@メール.グーグル). In order for this standard to become a reality, every email provider and every website that asks you for your email address must adopt it. That’s obviously a tough hill to climb. The technology is there, but someone has to take the first step.
  • Today we're ready to be that someone. Starting now, Gmail (and shortly, Calendar) will recognize addresses that contain accented or non-Latin characters. This means Gmail users can send emails to, and receive emails from, people who have these characters in their email addresses. Of course, this is just a first step and there’s still a ways to go. In the future, we want to make it possible for you to use them to create Gmail accounts. Last month, we announced the addition of 13 new languages in Gmail. Language should never be a barrier when it comes to connecting with others and with this step forward, truly global email is now even closer to becoming a reality.
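The standard referenced above is the IETF’s 2012 internationalized-email work (the SMTPUTF8 extension, RFC 6531), and recent versions of Python’s standard library expose it. As a rough, hedged illustration of what a sending client does with such an address, the sketch below is a generic example rather than anything Gmail-specific; the host name and addresses are hypothetical placeholders.

```python
# A minimal sketch, not Google's implementation: sending mail to an address
# containing accented characters relies on the SMTPUTF8 extension the IETF
# standardized in 2012 (RFC 6531). Host and addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "corgicrazy@example.com"
msg["To"] = "josé.ramón@example.org"   # accented local part, valid under the new standard
msg["Subject"] = "Internationalized email test"
msg.set_content("Hello from a UTF-8-capable sender.")

with smtplib.SMTP("smtp.example.com") as server:
    # send_message() detects the non-ASCII address, serializes the headers as
    # UTF-8, and negotiates SMTPUTF8 with the server; a server that has not
    # adopted the standard causes smtplib to raise SMTPNotSupportedError,
    # which is exactly the adoption gap the post describes.
    server.send_message(msg)
```

That failure mode is the practical meaning of Gmail’s first step: both the sending and the receiving side have to support the extension before such addresses are usable end to end.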
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Just Security - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities, and to conduct surveillance. The threat of such a breach, whether from a hacker or a nation-state actor, is very real. Even if companies go the extra mile and create a separate access key for each phone, significant vulnerabilities will remain: a malicious actor who gained access to the database containing those keys could defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementing and maintaining such a complex system could be high. (A brief sketch of the passcode-derived encryption at rest that such a key database would undermine follows these excerpts.)
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.

Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope

In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately recognizable the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology, not even around today, will be commonplace in 10 or 20 years? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
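To ground the device-encryption discussion above, here is a small, hedged sketch of the general idea the article keeps returning to: data at rest that can be unlocked only with a key stretched from the user’s passcode. It is a generic illustration using the third-party Python cryptography package, not a description of Apple’s or Google’s actual designs (which also bind keys to dedicated hardware), and the passcode, salt handling, and data below are illustrative placeholders.

```python
# A minimal sketch of passcode-derived encryption at rest, assuming the
# third-party 'cryptography' package. This illustrates the general technique,
# not Apple's or Google's implementation.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def key_from_passcode(passcode: str, salt: bytes) -> bytes:
    """Stretch a short passcode into a 256-bit key; the high iteration count slows brute force."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passcode.encode()))


salt = os.urandom(16)                       # stored on the device alongside the ciphertext
vault = Fernet(key_from_passcode("4921", salt))

ciphertext = vault.encrypt(b"contacts, photos, messages, banking tokens")

# Only someone holding the passcode (and salt) can reverse this; a thief who
# copies the storage gets ciphertext. Any mandated "backdoor" amounts to a
# second, escrowed way of recovering such a key.
assert vault.decrypt(ciphertext) == b"contacts, photos, messages, banking tokens"
```

The escrow-database risk described above is visible here: whoever can reproduce the key, whether by passcode or by a stored copy, can read everything that key protects.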
Paul Merrell

European Parliament Urges Protection for Edward Snowden - The New York Times - 0 views

  • The European Parliament narrowly adopted a nonbinding but nonetheless forceful resolution on Thursday urging the 28 nations of the European Union to recognize Edward J. Snowden as a “whistle-blower and international human rights defender” and shield him from prosecution. On Twitter, Mr. Snowden, the former National Security Agency contractor who leaked millions of documents about electronic surveillance by the United States government, called the vote a “game-changer.” But the resolution has no legal force and limited practical effect for Mr. Snowden, who is living in Russia on a three-year residency permit. Whether to grant Mr. Snowden asylum remains a decision for the individual European governments, and none have done so thus far. Still, the resolution was the strongest statement of support seen for Mr. Snowden from the European Parliament. At the same time, the close vote — 285 to 281 — suggested the extent to which some European lawmakers are wary of alienating the United States.
  • The resolution calls on European Union members to “drop any criminal charges against Edward Snowden, grant him protection and consequently prevent extradition or rendition by third parties.” In June 2013, shortly after Mr. Snowden’s leaks became public, the United States charged him with theft of government property and violations of the Espionage Act of 1917. By then, he had flown to Moscow, where he spent weeks in legal limbo before he was granted temporary asylum and, later, a residency permit. Four Latin American nations have offered him permanent asylum, but he does not believe he could travel from Russia to those countries without running the risk of arrest and extradition to the United States along the way.
  • The White House, which has used diplomatic efforts to discourage even symbolic resolutions of support for Mr. Snowden, immediately criticized the resolution. “Our position has not changed,” said Ned Price, a spokesman for the National Security Council in Washington. “Mr. Snowden is accused of leaking classified information and faces felony charges here in the United States. As such, he should be returned to the U.S. as soon as possible, where he will be accorded full due process.” Jan Philipp Albrecht, one of the lawmakers who sponsored the resolution in Europe, said it should increase pressure on national governments.
  • “It’s the first time a Parliament votes to ask for this to be done — and it’s the European Parliament,” Mr. Albrecht, a German lawmaker with the Greens political bloc, said in a phone interview shortly after the vote, which was held in Strasbourg, France. “So this has an impact surely on the debate in the member states.” The resolution “is asking or demanding the member states’ governments to end all the charges and to prevent any extradition to a third party,” Mr. Albrecht said. “That’s a very clear call, and that can’t be just ignored by the governments,” he said.