
Future of the Web
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
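  • To make that idea concrete, here is a quick sketch of what class-based extension could look like. The element structure is ordinary XHTML; the class names are invented for the example rather than taken from any particular microformat or from the article:

        <div class="chapter">
          <p class="epigraph">Everything should be made as simple as possible, but not simpler.</p>
          <p>Ordinary body text stays an ordinary paragraph.</p>
          <p class="summary">A closing summary, visible to any software that knows to look
             for class="summary" and harmless to everything that does not.</p>
        </div>

    Because the extra meaning rides along in an attribute that every browser already ignores gracefully, this kind of tagging costs little to add and nothing to display.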
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
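  • Concretely, unzipping a typical ePub reveals a mimetype file, a META-INF/container.xml that points to the package file, the package (.opf) file listing metadata and contents, and the XHTML chapters themselves. The container.xml piece is small and standardized; a minimal version looks like the following (the OEBPS/content.opf path is a common convention, not a requirement):

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <!-- points to the package file that declares metadata, content files, and reading order -->
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>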
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
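  • For a sense of what that cluster looks like, unpacking an .idml file typically reveals a structure along these lines (the file names are representative; the exact contents vary from document to document):

        designmap.xml            (the package's master index)
        MasterSpreads/...        (master page definitions)
        Spreads/Spread_*.xml     (page geometry and frames)
        Stories/Story_*.xml      (the text content, one file per story)
        Resources/Styles.xml     (paragraph and character style definitions)
        Resources/Graphic.xml    (colours and swatches)

    The content lives in plain, readable XML files, separate from the layout machinery.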
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, yielding a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
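  • As an illustration of that kind of cleanup (the class name is a stand-in for whatever the export actually emits), the conversion turns presentation-flavoured markup such as

        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

    into structural XHTML:

        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>

    The second form says what the content is, not merely how it happened to look in the source layout.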
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
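  • A bare-bones export template of that kind can amount to little more than a small stylesheet; a sketch (the selectors and measurements here are invented for the example, not taken from the project):

        /* export view: no navigation chrome, one plain text column */
        #navigation { display: none; }
        #content {
          width: 36em;
          margin: 0 auto;
          font-family: serif;
        }
        h1, h2 { page-break-after: avoid; }

    The reading template and the export template can share the same XHTML; only the stylesheet changes.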
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
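  • The heart of such a script is a handful of templates that map XHTML elements onto InCopy story elements. A heavily simplified sketch of the idea (the “Body Text” style name is assumed to be defined in the InDesign template, and a real ICML file also needs the surrounding Document and Story wrapper elements, omitted here for brevity):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">

          <!-- walk the XHTML input and emit one paragraph-style range per <p> -->
          <xsl:template match="/">
            <xsl:apply-templates select="//xhtml:p"/>
          </xsl:template>

          <!-- each paragraph is tagged with a paragraph style defined in the InDesign template -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
              <CharacterStyleRange>
                <Content><xsl:value-of select="normalize-space(.)"/></Content>
                <Br/>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Run through a processor with something like xsltproc -o chapter.icml xhtml-to-icml.xsl chapter.xhtml (file names illustrative), the output is a file InDesign can “place” against its own styles.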
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
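  • The “wrapper” that a tool like eCub generates is essentially the package (.opf) file: a metadata block, a manifest of the chapter and stylesheet files, and a spine giving the reading order (plus, in ePub 2, a toc.ncx navigation file). Stripped to the essentials, with the title, identifier, and file names invented for the sketch:

        <?xml version="1.0" encoding="UTF-8"?>
        <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Example Title</dc:title>
            <dc:language>en</dc:language>
            <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
          </metadata>
          <manifest>
            <item id="css"  href="book.css"        media-type="text/css"/>
            <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
            <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
          </manifest>
          <spine toc="ncx">
            <itemref idref="ch01"/>
          </spine>
        </package>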
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-native XML document type, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of its application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
Paul Merrell

Reset The Net - Privacy Pack - 1 views

  • This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same.
  • Fight for the Future and Center for Rights will contact you about future campaigns. Privacy Policy
  •  
    I wound up joining this campaign at the urging of the ACLU after checking the Privacy Policy. The Reset the Net campaign seems to be endorsed by a lot of change-oriented groups, from the ACLU to Greenpeace to the Pirate Party. A fair number of groups with a Progressive agenda, but certainly not limited to them. The right answer to that situation is to urge other groups to endorse, not to avoid the campaign. Single-issue coalition-building is all about focusing on an area of agreement rather than worrying about who you are rubbing elbows with. I have been looking for a bipartisan group that's tackling government surveillance issues via mass actions but has no corporate sponsors. This might be the one. The reason: Corporate types like Google have no incentive to really butt heads with the government voyeurs. They are themselves engaged in massive surveillance of their users and certainly will not carry the battle for digital privacy over to the private sector. But this *is* a battle over digital privacy and legally defining user privacy rights in the private sector is just as important as cutting back on government surveillance. As we have learned through the Snowden disclosures, what the private internet companies have, the NSA can and does get. The big internet services successfully pushed in the U.S. for authorization to publish more numbers about how many times they pass private data to the government, but went no farther. They wanted to be able to say they did something, but there's a revolving door of staffers between NSA and the big internet companies and the internet service companies' data is an open book to the NSA. The big internet services are not champions of their users' privacy. If they were, they would be featuring end-to-end encryption with encryption keys unique to each user and unknown to the companies. Like some startups in Europe are doing. E.g., the Wuala.com filesync service in Switzerland (first 5 GB of storage free). Compare tha
  •  
    "This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same."
Gary Edwards

Google's ARC Beta runs Android apps on Chrome OS, Windows, Mac, and Linux | Ars Technica - 0 views

  • So calling all developers: You can now (probably, maybe) run your Android apps on just about anything—Android, Chrome OS, Windows, Mac, and Linux—provided you fiddle with the ARC Welder and submit your app to the Chrome Web Store.
  • The App Runtime for Chrome and Native Client are hugely important projects because they potentially allow Google to push a "universal binary" strategy on developers. "Write your app for Android, and we'll make it run on almost every popular OS! (other than iOS)" Google Play Services support is a major improvement for ARC and signals just how ambitious this project is. Some day it will be a great sales pitch to convince developers to write for Android first, which gives them apps on all these desktop OSes for free.
  •  
    Thanks Marbux. ARC appears to be an extraordinary technology. Funny but Florian has been pushing Native Client (NaCL) since it was first ported from Firefox to Chrome. Looks like he was right. "In September, Google launched ARC - the "App Runtime for Chrome" - a project that allowed Android apps to run on Chrome OS. A few days later, a hack revealed the project's full potential: it enabled ARC on every "desktop" version of Chrome, meaning you could unofficially run Android apps on Chrome OS, Windows, Mac OS X, and Linux. ARC made Android apps run on nearly every computing platform (save iOS). ARC is an early beta, though, so Google has kept the project's reach very limited - only a handful of apps have been ported to ARC, which have all been the result of close collaborations between Google and the app developer. Now though, Google is taking two big steps forward with the latest developer preview: it's allowing any developer to run their app on ARC via a new Chrome app packager, and it's allowing ARC to run on any desktop OS with a Chrome browser. ARC runs on Windows, Mac, Linux, and Chrome OS thanks to Native Client (abbreviated "NaCL"). NaCL is a Chrome sandboxing technology that allows Chrome apps and plugins to run at "near native" speeds, taking full advantage of the system's CPU and GPU. Native Client turns Chrome into a development platform: write to it, and it'll run on all desktop Chrome browsers. Google ported a full Android stack to Native Client, allowing Android apps to run on most major OSes. With the original ARC release, there was no official process for getting an Android app running on the Chrome platform (other than working with Google). Now Google has released the adorably-named ARC Welder, a Chrome app which will convert any Android app into an ARC-powered Chrome app. It's mainly for developers to package up an APK and submit it to the Chrome Web Store, but anyone can package and launch an APK from the app directly."
Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Readium.org Turns One - 0 views

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms - Android, iOS, OS X, and Windows - and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commons | Cato Unbound - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” That broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grows, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is "not an invention" to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret API's even as the OS continued to evolve. Maybe. But a full disclosure of the API's coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly into iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • ...1 more annotation...
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but were wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 Billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process value-added transition to a Cloud Productivity Platform, and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in Peace Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon to be released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers layer on the other using open and totally transparent API's. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS API's would be developed privately and separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti IBM - evil empire stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows API's. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Paul Merrell

U.S. Says It Spied on 89,000 Targets Last Year, But the Number Is Deceptive | Threat Level | WIRED - 0 views

  • About 89,000 foreigners or organizations were targeted for spying under a U.S. surveillance order last year, according to a new transparency report. The report was released for the first time Friday by the Office of the Director of National Intelligence, upon order of the president, in the wake of surveillance leaks by NSA whistleblower Edward Snowden. But the report, which covers only surveillance orders issued in 2013, doesn’t tell the whole story about how many individuals the spying targeted or how many Americans were caught in the surveillance that targeted foreigners. Civil liberties groups say the real number is likely “orders of magnitude” larger than this. “Even if it was an honest definition of ‘target’—that is, an individual instead of a group—that also is not encompassing those who are ancillary to a target and are caught up in the dragnet,” says Kurt Opsahl, deputy general counsel of the Electronic Frontier Foundation.
  • The report, remarkably, shows that the government obtained just one order last year under Section 702 of FISA—which allows for bulk collection of data on foreigners—and that this one order covered 89,138 targets. But, as the report notes, “target” can refer to “an individual person, a group, an organization composed of multiple individuals or a foreign power that possesses or is likely to communicate foreign intelligence information.” Furthermore, Section 702 orders are actually certificates issued by the FISA Court that can cover surveillance of an entire facility. And since, as the government points out in its report, the government cannot know how many people use a facility, the figure only “reflects an estimate of the number of known users of particular facilities (sometimes referred to as selectors) subject to intelligence collection under those Certifications,” the report notes.
  • “If you’re actually trying to get a sense of the number of human beings affected or the number of Americans affected, the number of people affected is vastly, vastly larger,” says Julian Sanchez, senior fellow at the Cato Institute. “And how many of those are Americans is impossible to say. But [although] you may not think you are routinely communicating with foreign persons, [this] is not any kind of assurance that your communications are not part of the traffic subject to interception.” Sanchez points out that each individual targeted is likely communicating with dozens or hundreds of others, whose communications will be picked up in the surveillance. “And probably a lot of these targets are not individuals but entire web sites or companies. While [a company like the Chinese firm] Huawei might be a target, thousands of emails used by thousands of employees will be swept up.” How many of those employees might be American or communicating with Americans is unknown.
  • ...5 more annotations...
  • Also revealed in today’s report is the number of times the government has queried the controversial phone records database it created by collecting the phone records of every subscriber from U.S. providers. According to the report, the government used 423 “selectors” to search its massive phone records database, which includes records going back to at least 2006 when the program began. A search involves querying a specific phone number or device ID that appears in the database. The government has long maintained that its collection of phone records isn’t a violation of its authority, since it only views the records of specific individuals targeted in an investigation. But such searches, even if targeted at phone numbers used by foreigners, would include calls made to and from Americans as well as calls exchanged with people two or three hops out from the targeted number (see the hop-count sketch at the end of this item).
  • In its report, the government indicated that the 423 selectors involved just 248 “known or presumed” Americans whose information was collected by the agency in the database. But Opsahl says that both of these numbers are deceptive given what we know about the database and how it’s been used. “We know it’s affecting millions of people,” he points out. But “then we have estimated numbers of affected people [that are just] in the three digits. That requires some effort [on the government's part] to find a way to do the definition of the number [in such a way] to make it as small as possible.”
  • One additional figure today’s report covers is the number of National Security Letters the government issued last year to businesses to obtain data on accountholders and users—19,212. NSLs are written demands from the FBI that compel internet service providers, credit companies, financial institutions and others to hand over confidential records about their customers, such as subscriber information, phone numbers and e-mail addresses, websites visited, and more. These letters are a powerful tool because they do not require court approval, and they come with a built-in gag order, preventing recipients from disclosing to anyone that they have received an NSL. An FBI agent looking into a possible anti-terrorism case can self-issue an NSL to a credit bureau, ISP, or phone company with only the sign-off of the Special Agent in Charge of their office. The FBI has merely to assert that the information is “relevant” to an investigation into international terrorism or clandestine intelligence activities.
  • The FBI has issued hundreds of thousands of NSLs over the years and has been reprimanded for abusing them. Last year a federal judge ruled that the use of NSLs is unconstitutional, due to the gag order that accompanies them, and ordered the government to stop using them. Her ruling, however, was stayed pending the government’s appeal.
  • According to the government’s report today, the 19,000 NSLs issued last year involved more than 38,000 requests for information.
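    A note on scale: the "two or three hops out" language above is easy to underestimate. Below is a minimal Python sketch using an invented in-memory call graph and made-up phone numbers (not any agency's real data model or tooling) that shows how quickly a query naming a single selector sweeps in other people's records.
      from collections import deque

      call_records = {  # phone number -> numbers it exchanged calls with (all invented)
          "+15550001": ["+15550002", "+15550003"],
          "+15550002": ["+15550001", "+15550004", "+15550005"],
          "+15550003": ["+15550001", "+15550006"],
          "+15550004": ["+15550002", "+15550007", "+15550008"],
          "+15550005": ["+15550002"],
          "+15550006": ["+15550003", "+15550009"],
          "+15550007": ["+15550004"],
          "+15550008": ["+15550004"],
          "+15550009": ["+15550006"],
      }

      def numbers_within_hops(selector, hops):
          """Breadth-first walk of the call graph out to `hops` links from the selector."""
          seen = {selector}
          frontier = deque([(selector, 0)])
          while frontier:
              number, depth = frontier.popleft()
              if depth == hops:
                  continue
              for contact in call_records.get(number, []):
                  if contact not in seen:
                      seen.add(contact)
                      frontier.append((contact, depth + 1))
          return seen

      for k in (1, 2, 3):
          print(k, "hop(s):", len(numbers_within_hops("+15550001", k)), "numbers swept in")
    Each extra hop multiplies the reach by roughly the average number of contacts per subscriber, which is why a query built around one foreign selector can still touch the records of a large number of Americans.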
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar | Just Security - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • ...6 more annotations...
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search the content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target (see the sketch at the end of this item).
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection; more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
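    The upstream pattern matching described in the annotations above (a network tap looking for a selector such as example@gmail.com in passing traffic) fails against encrypted traffic for a simple reason: the selector's bytes no longer appear on the wire. A minimal Python sketch, using a toy one-time-pad XOR from the standard library purely for illustration (this is not any real collection system or wire protocol, and the message text is invented):
      import os

      selector = b"example@gmail.com"
      message = b"Please forward the draft to example@gmail.com before Friday."

      pad = os.urandom(len(message))  # one-time pad, same length as the message
      ciphertext = bytes(m ^ p for m, p in zip(message, pad))

      print(selector in message)      # True:  the selector is matchable in cleartext traffic
      print(selector in ciphertext)   # False: the same bytes are unsearchable once encrypted

      # Matching works again only after decryption, i.e. only if the collector holds
      # the key or retains the ciphertext in hopes of decrypting it later - which is
      # the retention incentive the annotations above describe.
      recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
      print(selector in recovered)    # True
    This is why selector-based "incidental" collection does not translate cleanly to encrypted traffic, and why retaining whole classes of encrypted data becomes the fallback.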
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turned over to AI machines Alex Jones' Infowars: There's a war on for your mind! - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. and they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the linked Scientific American article, I note that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of sensationalist information not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
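    The Nature excerpt's point that researchers "can usefully frame questions in machine-learning research as optimisation problems" can be made concrete. What a D-Wave-style annealer minimises is, roughly, a quadratic function over binary variables (a QUBO). Here is a minimal Python sketch with invented coefficients, using brute force in place of any annealer; it illustrates the problem framing only, not D-Wave's actual hardware or toolchain.
      from itertools import product

      # Tiny QUBO: max-cut on a 4-node cycle, written as an energy to minimise.
      # Putting neighbouring nodes on different sides of the cut lowers the energy.
      edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

      def energy(bits):
          return sum(-(bits[i] + bits[j] - 2 * bits[i] * bits[j]) for i, j in edges)

      best = min(product((0, 1), repeat=4), key=energy)
      print("best assignment:", best, "energy:", energy(best))
      # -> best assignment: (0, 1, 0, 1) energy: -4  (all four edges cut)
    An annealer is pitched as a physical shortcut for this kind of minimisation once the number of variables makes exhaustive search infeasible; whether the D-Wave machines actually beat classical solvers on such problems is exactly the question the comment above treats with caution.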
Gonzalo San Gil, PhD.

The death of patents and what comes after: Alicia Gibb at TEDxStockholm - YouTube - 0 views

  •  
    "Published on Dec 18, 2012 Alicia Gibb got her start as a technologist from her combination in backgrounds including: informatics and library science, a belief system of freedom of information, inspiration from art and design, and a passion for hardware hacking. Alicia has worked between the crossroads of art and electronics for the past nine years, and has worked for the open source hardware community for the past three. She currently founded and is running the Open Source Hardware Association, an organization to educate and promote building and using open source hardware of all types. In her spare time, Alicia is starting an open source hardware company specific to education. Previous to becoming an advocate and an entrepreneur, Alicia was a researcher and prototyper at Bug Labs where she ran the academic research program and the Test Kitchen, an open R&D Lab. Her projects centered around developing lightweight additions to the BUG platform, as well as a sensor-based data collection modules. She is a member of NYCResistor, co-chair of the Open Hardware Summit, and a member of the advisory board for Linux Journal. She holds a degree in art education, a M.S. in Art History and a M.L.I.S. in Information Science from Pratt Institute. She is self-taught in electronics. Her electronics work has appeared in Wired magazine, IEEE Spectrum, Hackaday and the New York Times. When Alicia is not researching at the crossroads of open technology and innovation she is prototyping artwork that twitches, blinks, and might even be tasty to eat. In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual
Gary Edwards

Apple and Facebook Flash Forward to Computer Memory of the Future | Enterprise | WIRED - 1 views

  •  
    Great story that is at the center of a new cloud computing platform. I met David Flynn back when he was first demonstrating the Realmsys flash card. Extraordinary stuff. He was using the technology to open a secure Linux computing window on an operating Windows XP system. The card opened up a secure data socket, connecting to any Internet Server or Data Server, and running applications on that data - while running Windows and Windows apps in the background. Incredible mesh of Linux, streaming data, and legacy Windows apps. Every time I find these tech pieces explaining Fusion-io though, I can't help but think that David Flynn is one of the most decent, kind and truly deserving of success people that I have ever met. excerpt: "Apple is spending mountains of money on a new breed of hardware device from a company called Fusion-io. As a public company, Fusion-io is required to disclose information about customers that account for an unusually large portion of its revenue, and with its latest annual report, the Salt Lake City outfit reveals that in 2012, at least 25 percent of its revenue - $89.8 million - came from Apple. That's just one figure, from just one company. But it serves as a signpost, showing you where the modern data center is headed. 'There's now a blurring between the storage world and the memory world. People have been enlightened by Fusion-io.' - Gary Gentry Inside a data center like the one Apple operates in Maiden, North Carolina, you'll find thousands of computer servers. Fusion-io makes a slim card that slots inside these machines, and it's packed with hundreds of gigabytes of flash memory, the same stuff that holds all the software and the data on your smartphone. You can think of this card as a much-needed replacement for the good old-fashioned hard disk that typically sits inside a server. Much like a hard disk, it stores information. But it doesn't have any moving parts, which means it's generally more reliable. It c
Paul Merrell

Revealed: How DOJ Gagged Google over Surveillance of WikiLeaks Volunteer - The Intercept - 0 views

  • The Obama administration fought a legal battle against Google to secretly obtain the email records of a security researcher and journalist associated with WikiLeaks. Newly unsealed court documents obtained by The Intercept reveal the Justice Department won an order forcing Google to turn over more than one year’s worth of data from the Gmail account of Jacob Appelbaum (pictured above), a developer for the Tor online anonymity project who has worked with WikiLeaks as a volunteer. The order also gagged Google, preventing it from notifying Appelbaum that his records had been provided to the government. The surveillance of Appelbaum’s Gmail account was tied to the Justice Department’s long-running criminal investigation of WikiLeaks, which began in 2010 following the transparency group’s publication of a large cache of U.S. government diplomatic cables. According to the unsealed documents, the Justice Department first sought details from Google about a Gmail account operated by Appelbaum in January 2011, triggering a three-month dispute between the government and the tech giant. Government investigators demanded metadata records from the account showing email addresses of those with whom Appelbaum had corresponded between the period of November 2009 and early 2011; they also wanted to obtain information showing the unique IP addresses of the computers he had used to log in to the account.
  • The Justice Department argued in the case that Appelbaum had “no reasonable expectation of privacy” over his email records under the Fourth Amendment, which protects against unreasonable searches and seizures. Rather than seeking a search warrant that would require it to show probable cause that he had committed a crime, the government instead sought and received an order to obtain the data under a lesser standard, requiring only “reasonable grounds” to believe that the records were “relevant and material” to an ongoing criminal investigation. Google repeatedly attempted to challenge the demand, and wanted to immediately notify Appelbaum that his records were being sought so he could have an opportunity to launch his own legal defense. Attorneys for the tech giant argued in a series of court filings that the government’s case raised “serious First Amendment concerns.” They noted that Appelbaum’s records “may implicate journalistic and academic freedom” because they could “reveal confidential sources or information about WikiLeaks’ purported journalistic or academic activities.” However, the Justice Department asserted that “journalists have no special privilege to resist compelled disclosure of their records, absent evidence that the government is acting in bad faith,” and refused to concede Appelbaum was in fact a journalist. It claimed it had acted in “good faith throughout this criminal investigation, and there is no evidence that either the investigation or the order is intended to harass the … subscriber or anyone else.” Google’s attempts to fight the surveillance gag order angered the government, with the Justice Department stating that the company’s “resistance to providing the records” had “frustrated the government’s ability to efficiently conduct a lawful criminal investigation.”
  • Google accused the government of hyperbole and argued that the backlash over the Twitter order did not justify secrecy related to the Gmail surveillance. “Rather than demonstrating how unsealing the order will harm its well-publicized investigation, the government lists a parade of horribles that have allegedly occurred since it unsealed the Twitter order, yet fails to establish how any of these developments could be further exacerbated by unsealing this order,” wrote Google’s attorneys. “The proverbial toothpaste is out of the tube, and continuing to seal a materially identical order will not change it.” But Google’s attempt to overturn the gag order was denied by magistrate judge Ivan D. Davis in February 2011. The company launched an appeal against that decision, but this too was rebuffed, in March 2011, by District Court judge Thomas Selby Ellis, III.
  • ...4 more annotations...
  • The Justice Department wanted to keep the surveillance secret largely because of an earlier public backlash over its WikiLeaks investigation. In January 2011, Appelbaum and other WikiLeaks volunteers – including Icelandic parliamentarian Birgitta Jonsdottir – were notified by Twitter that the Justice Department had obtained data about their accounts. This disclosure generated widespread news coverage and controversy; the government says in the unsealed court records that it “failed to anticipate the degree of damage that would be caused” by the Twitter disclosure and did not want to “exacerbate this problem” when it went after Appelbaum’s Gmail data. The court documents show the Justice Department said the disclosure of its Twitter data grab “seriously jeopardized the [WikiLeaks] investigation” because it resulted in efforts to “conceal evidence” and put public pressure on other companies to resist similar surveillance orders. It also claimed that officials named in the subpoena ordering Twitter to turn over information were “harassed” after a copy was published by Intercept co-founder Glenn Greenwald at Salon in 2011. (The only specific evidence of the alleged harassment cited by the government is an email that was sent to an employee of the U.S. Attorney’s office that purportedly said: “You guys are fucking nazis trying to controll [sic] the whole fucking world. Well guess what. WE DO NOT FORGIVE. WE DO NOT FORGET. EXPECT US.”)
  • The government agreed to unseal some of the court records on Apr. 1 this year, and they were apparently turned over to Appelbaum on May 14 through a notification sent to his Gmail account. The files were released on condition that they would contain some redactions, which are bizarre and inconsistent, in some cases censoring the name of “WikiLeaks” from cited public news reports. Not all of the documents in the case – such as the original surveillance orders contested by Google – were released as part of the latest disclosure. Some contain “specific and sensitive details of the investigation” and “remain properly sealed while the grand jury investigation continues,” according to the court records from April this year. Appelbaum, an American citizen who is based in Berlin, called the case “a travesty that continues at a slow pace” and said he felt it was important to highlight “the absolute madness in these documents.”
  • He told The Intercept: “After five years, receiving such legal documents is neither a shock nor a needed confirmation. … Will we ever see the full documents about our respective cases? Will we even learn the names of those signing so-called legal orders against us in secret sealed documents? Certainly not in a timely manner and certainly not in a transparent, just manner.” The 32-year-old, who has recently collaborated with Intercept co-founder Laura Poitras to report revelations about National Security Agency surveillance for German news magazine Der Spiegel, said he plans to remain in Germany “in exile, rather than returning to the U.S. to experience more harassment of a less than legal kind.”
  • “My presence in Berlin ensures that the cost of physically harassing me or politically harassing me is much higher than when I last lived on U.S. soil,” Appelbaum said. “This allows me to work as a journalist freely from daily U.S. government interference. It also ensures that any further attempts to continue this will be forced into the open through [a Mutual Legal Assistance Treaty] and other international processes. The German government is less likely to allow the FBI to behave in Germany as they do on U.S. soil.” The Justice Department’s WikiLeaks investigation is headed by prosecutors in the Eastern District of Virginia. Since 2010, the secretive probe has seen activists affiliated with WikiLeaks compelled to appear before a grand jury and the FBI attempting to infiltrate the group with an informant. Earlier this year, it was revealed that the government had obtained the contents of three core WikiLeaks staffers’ Gmail accounts as part of the investigation.
Paul Merrell

Joint - Dear Colleague Letter: Electronic Book Readers - 0 views

  • U.S. Department of Justice Civil Rights Division U.S. Department of Education Office for Civil Rights
  •  
    June 29, 2010 Dear College or University President: We write to express concern on the part of the Department of Justice and the Department of Education that colleges and universities are using electronic book readers that are not accessible to students who are blind or have low vision and to seek your help in ensuring that this emerging technology is used in classroom settings in a manner that is permissible under federal law. A serious problem with some of these devices is that they lack an accessible text-to-speech function. Requiring use of an emerging technology in a classroom environment when the technology is inaccessible to an entire population of individuals with disabilities - individuals with visual disabilities - is discrimination prohibited by the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504) unless those individuals are provided accommodations or modifications that permit them to receive all the educational benefits provided by the technology in an equally effective and equally integrated manner. ... The Department of Justice recently entered into settlement agreements with colleges and universities that used the Kindle DX, an inaccessible, electronic book reader, in the classroom as part of a pilot study with Amazon.com, Inc. In summary, the universities agreed not to purchase, require, or recommend use of the Kindle DX, or any other dedicated electronic book reader, unless or until the device is fully accessible to individuals who are blind or have low vision, or the universities provide reasonable accommodation or modification so that a student can acquire the same information, engage in the same interactions, and enjoy the same services as sighted students with substantially equivalent ease of use. The texts of these agreements may be viewed on the Department of Justice's ADA Web site, www.ada.gov. (To find these settlements on www.ada.gov, search for "Kindle.") Consisten
Gary Edwards

Cisco buys PostPath: WebEx to compete with Exchange, Outlook, Office? | Between the Lines | ZDNet.com - 0 views

  •  
    Once you add in better email and calendar support, WebEx could become more appealing to the enterprise. PostPath has a Linux-based collaboration system built on an AJAX client that doesn't need a browser. Cisco added that the company's strategy is to develop "an integrated collaboration platform designed for how we work today and into the future." And better yet: PostPath's pitch is that it is an Exchange alternative and a "Linux-based corporate email server." Let's read between the lines: Doesn't this sound a lot like an end-run around Microsoft Office, Outlook and Exchange just like Google is trying to do with Google Apps? Cisco probably has no desire to compete head-on with Microsoft (or at least admit it), but the company obviously sees something here and coupling PostPath with WebEx could be a threat to Redmond. In fact, Cisco could be a bigger threat to Microsoft in the enterprise than Google. Why? Cisco already sells enterprises a lot of stuff. Isn't a collaboration suite really just an extension of the network?
Paul Merrell

Tell Congress: My Phone Calls are My Business. Reform the NSA. | EFF Action Center - 3 views

  • The USA PATRIOT Act granted the government powerful new spying capabilities that have grown out of control—but the provision that the FBI and NSA have been using to collect the phone records of millions of innocent people expires on June 1. Tell Congress: it’s time to rethink out-of-control spying. A vote to reauthorize Section 215 is a vote against the Constitution.
  • On June 5, 2013, the Guardian published a secret court order showing that the NSA has interpreted Section 215 to mean that, with the help of the FBI, it can collect the private calling records of millions of innocent people. The government could even try to use Section 215 for bulk collection of financial records. The NSA’s defenders argue that invading our privacy is the only way to keep us safe. But the White House itself, along with the President’s Review Board, has said that the government can accomplish its goals without bulk telephone records collection. And the Privacy and Civil Liberties Oversight Board said, “We have not identified a single instance involving a threat to the United States in which [bulk collection under Section 215 of the PATRIOT Act] made a concrete difference in the outcome of a counterterrorism investigation.” Since June of 2013, we’ve continued to learn more about how out of control the NSA is. But what has not happened since June is legislative reform of the NSA. There have been myriad bipartisan proposals in Congress—some authentic and some not—but lawmakers didn’t pass anything. We need comprehensive reform that addresses all the ways the NSA has overstepped its authority and provides the NSA with appropriate and constitutional tools to keep America safe. In the meantime, tell Congress to take a stand. A vote against reauthorization of Section 215 is a vote for the Constitution.
  •  
    EFF has launched an email campaign to press members of Congress not to renew section 215 of the Patriot Act when it expires on June 1, 2015. Section 215 authorizes FBI officials to "make an application for an order requiring the production of *any tangible things* (including books, records, papers, documents, and other items) for an investigation to obtain foreign intelligence information not concerning a United States person or to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution." http://www.law.cornell.edu/uscode/text/50/1861 The section has been abused to obtain bulk collection of all telephone records for the NSA's storage and processing. But the section goes farther and lists as specific examples of records that can be obtained under section 215's authority, "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records." Think of the NSA's voracious appetite for new "haystacks" it can store and search in its gigantic new data center in Utah. Then ask yourself, "do I want the NSA to obtain all of my personal data, store it, and search it at will?" If your answer is "no," you might consider visiting this page to send your Congress critters an email urging them to vote against renewal of section 215 and to vote for other NSA reforms listed in the EFF sample email text. Please do not procrastinate. Do it now, before you forget. Every voice counts.
Paul Merrell

Hey ITU Member States: No More Secrecy, Release the Treaty Proposals | Electronic Frontier Foundation - 0 views

  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet.
  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet. In similar fashion to the secrecy surrounding ACTA and TPP, the ITR proposals are being negotiated in secret, with high barriers preventing access to any negotiating document. While aspiring to be a venue for Internet policy-making, the ITU Member States do not appear to be very open to the idea of allowing all stakeholders (including civil society) to participate. The framework under which the ITU operates does not allow for any form of open participation. Mere access to documents and decision-makers is sold by the ITU to corporate “associate” members at prohibitively high rates. Indeed, the ITU’s business model appears to depend on revenue generation from those seeking to ‘participate’ in its policy-making processes. This revenue-based principle of policy-making is deeply troubling in and of itself, as the objective of policy making should be to reach the best possible outcome.
  • EFF, European Digital Rights, CIPPIC, CDT, and a coalition of civil society organizations from around the world are demanding that the ITU Secretary General, the WCIT-12 Council Working Group, and ITU Member States open up the WCIT-12 and the Council Working Group negotiations by immediately releasing all the preparatory materials and Treaty proposals. If it affects the digital rights of citizens across the globe, the public needs to know what is going on and deserves to have a say. The Council Working Group is responsible for the preparatory work towards WCIT-12, setting the agenda for and consolidating input from participating governments and Sector Members. We demand full and meaningful participation for civil society in its own right, and without cost, at the Council Working Group meetings and the WCIT on equal footing with all other stakeholders, including participating governments. A transparent, open process that is inclusive of civil society at every stage is crucial to creating sound policy.
  • Civil society has good reason to be concerned regarding an expanded ITU policy-making role. To begin with, the institution does not appear to have high regard for the distributed multi-stakeholder decision making model that has been integral to the development of an innovative, successful and open Internet. In spite of commitments at WSIS to ensure Internet policy is based on input from all relevant stakeholders, the ITU has consistently put the interests of one stakeholder—Governments—above all others. This is discouraging, as some government interests are inconsistent with an open, innovative network. Indeed, the conditions which have made the Internet the powerful tool it is today emerged in an environment where the interests of all stakeholders are given equal footing, and existing Internet policy-making institutions at least aspire, with varying success, to emulate this equal footing. This formula is enshrined in the Tunis Agenda, which was committed to at WSIS in 2005:
  • 83. Building an inclusive development-oriented Information Society will require unremitting multi-stakeholder effort. We thus commit ourselves to remain fully engaged—nationally, regionally and internationally—to ensure sustainable implementation and follow-up of the outcomes and commitments reached during the WSIS process and its Geneva and Tunis phases of the Summit. Taking into account the multifaceted nature of building the Information Society, effective cooperation among governments, private sector, civil society and the United Nations and other international organizations, according to their different roles and responsibilities and leveraging on their expertise, is essential. 84. Governments and other stakeholders should identify those areas where further effort and resources are required, and jointly identify, and where appropriate develop, implementation strategies, mechanisms and processes for WSIS outcomes at international, regional, national and local levels, paying particular attention to people and groups that are still marginalized in their access to, and utilization of, ICTs.
  • Indeed, the ITU’s current vision of Internet policy-making is less one of distributed decision-making, and more one of ‘taking control.’ For example, in an interview conducted last June with ITU Secretary General Hamadoun Touré, Russian Prime Minister Vladimir Putin raised the suggestion that the union might take control of the Internet: “We are thankful to you for the ideas that you have proposed for discussion,” Putin told Touré in that conversation. “One of them is establishing international control over the Internet using the monitoring and supervisory capabilities of the International Telecommunication Union (ITU).” Perhaps of greater concern are views espoused by the ITU regarding the nature of the Internet. Yesterday, at the World Summit on the Information Society Forum, Mr. Alexander Ntoko, head of the Corporate Strategy Division of the ITU, explained the proposals made during the preparatory process for the WCIT, outlining a broad set of topics that can seriously impact people's rights. The categories include "security," "interoperability" and "quality of services," and the possibility that ITU recommendations and regulations will be not only binding on the world’s nations, but enforced.
  • Rights to online expression are unlikely to fare much better than privacy under an ITU model. During last year’s IGF in Kenya, a voluntary code of conduct was issued to further restrict free expression online. A group of nations (including China, the Russian Federation, Tajikistan and Uzbekistan) released a Resolution for the UN General Assembly titled, “International Code of Conduct for Information Security.”  The Code seems to be designed to preserve and protect national powers in information and communication. In it, governments pledge to curb “the dissemination of information that incites terrorism, secessionism or extremism or that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment.” This overly broad provision accords any state the right to censor or block international communications, for almost any reason.
  • Related posts: "EFF Joins Coalition Denouncing Secretive WCIT Planning Process" (June 2012); "Congressional Witnesses Agree: Multistakeholder Processes Are Right for Internet Regulation" (June 2012); "Widespread Participation Is Key in Internet Governance" (July 2012); "Blogging ITU: Internet Users Will Be Ignored Again if Flawed ITU Proposals Gain Traction" (June 2012); "Global Telecom Governance Debated at European Parliament Workshop"
Gonzalo San Gil, PhD.

What is open government? | opensource.com - 0 views

  •  
    "In general terms, an open government is one with high levels of transparency and mechanisms for public scrutiny and oversight in place, with an emphasis on government accountability. Transparency is considered the traditional hallmark of an open government, meaning that the public should have access to government-held information and be informed of government proceedings. In recent years, however, the definition of open government has expanded to include expectations for increased citizen participation & collaboration in government proceedings through the use of modern, open technologies."
Gary Edwards

Should you buy enterprise applications from a startup? - 0 views

  • The biggest advantage of startups, in Mueller's opinion? "They have no technical historical burden, and they don't care about many technical dependencies. They deliver easy-to-use technology with relatively simple but powerful integration options."
  • "The model we've used to buy on-premises software for 20-plus years is shifting," insists Laping. "There are new ways of selecting and vetting partners."
  • Part of that shift is simple: The business side sees what technology can do, and it's banging on IT's door, demanding ... what? Not new drop-down menus in the same-old ERP application, but rather state-of-the-art, cutting-edge, ain't-that-cool innovation. The landscape is wide open: Innovation can come in the form of new technologies, such as the Internet of Things, or from mobility, the cloud, virtualization -- in fact, from anywhere an enterprise vendor isn't filling a need. The easiest place to find that? Startups.
  • "The number one reason to consider a startup is that the current landscape of Magic Quadrant vendors is not serving a critical need. That's a problem."
  • Ravi Belani is managing partner at Alchemist Accelerator, a Palo Alto, Calif.-based venture-backed initiative focused on accelerating startups whose revenue comes from enterprises rather than consumers. He says, "The innovation that used to come out of big software houses isn't there anymore, while the pace of innovation in technology is accelerating."
  • He acknowledges that there has been a longtime concern with startups about the ability of their applications to scale, but given startups' ability to build their software on robust infrastructure platforms using IaaS or PaaS, and then deploy it via SaaS, "scalability isn't as big a deal as it used to be. It costs $50,000 today to do what you needed $50 million to do ten years ago. That means it takes less capital today to create the same innovation. Ten years ago, that was a moat, a barrier to entry, but software vendors don't own that moat anymore."
  • The confluence of offshore programming, open source technologies and cloud-based infrastructures has significantly lowered the barriers to entry for launching a new venture -- not to mention all those newly minted tech millionaires willing to be angel investors.
  • "In the new paradigm, [most software] implementations are so much shorter, you don't have to think about that risk. You're not talking about three years and $20 million. You're talking about 75 days and $50,000. You implement little modules and get big wins along the way."
  •  
    "The idea of buying an enterprise application from a startup company might sound like anathema to a CIO. But Chris Laping, CIO of restaurant chain Red Robin, based in Greenwood Village, Colo., disagrees. He believes we're in the middle of a significant shift that favors startups -- moving from huge applications with extensive features to task-based activities, inspired by the apps running on mobile devices. Featured Resource Presented by Scribe Software 10 Best Practices for Integrating Data Data integration is often underestimated and poorly implemented, taking time and resources. Yet it Learn More Mirco Mueller concurs. He is an IT architect for St. Gallen, Switzerland-based Helvetia Swiss Life Insurance Co., which -- having been founded in 1858 -- is about as far from a startup as possible. He recently chose a SaaS tool from an unnamed startup over what he calls "a much more powerful but much more complex alternative. Its list of features is shorter than the feature list of the big companies, but in terms of agility, flexibility, ease of use and adjustable business model, it beat" all of its competitors. The biggest advantage of startups, in Mueller's opinion? "They have no technical historical burden, and they don't care about many technical dependencies. They deliver easy-to-use technology with relatively simple but powerful integration options." There's certainly no lack of applications available from new players. At a recent conference focusing on innovation, Microsoft Ventures principal Daniel Sumner noted that every month for the last 88 months, there's been a $1 billion valuation for one startup or another. That's seven years and counting. But as Silicon Valley skeptics like to point out, those are the ones you hear about. For every successful startup, there are at least three that fail, according to 2012 research by Harvard Business School professor Shikhar Ghosh. So why, then, would CIOs in their right mind take the risk of buying enterprise applic
Paul Merrell

WASHINGTON: CIA admits it broke into Senate computers; senators call for spy chief's ouster | National Security & Defense | McClatchy DC - 0 views

  • An internal CIA investigation confirmed allegations that agency personnel improperly intruded into a protected database used by Senate Intelligence Committee staff to compile a scathing report on the agency’s detention and interrogation program, prompting bipartisan outrage and at least two calls for spy chief John Brennan to resign. “This is very, very serious, and I will tell you, as a member of the committee, someone who has great respect for the CIA, I am extremely disappointed in the actions of the agents of the CIA who carried out this breach of the committee’s computers,” said Sen. Saxby Chambliss, R-Ga., the committee’s vice chairman.
  • The rare display of bipartisan fury followed a three-hour private briefing by Inspector General David Buckley. His investigation revealed that five CIA employees, two lawyers and three information technology specialists improperly accessed or “caused access” to a database that only committee staff were permitted to use. Buckley’s inquiry also determined that a CIA crimes report to the Justice Department alleging that the panel staff removed classified documents from a top-secret facility without authorization was based on “inaccurate information,” according to a summary of the findings prepared for the Senate and House intelligence committees and released by the CIA. In other conclusions, Buckley found that CIA security officers conducted keyword searches of the emails of staffers of the committee’s Democratic majority -- and reviewed some of them -- and that the three CIA information technology specialists showed “a lack of candor” in interviews with Buckley’s office.
  • The inspector general’s summary did not say who may have ordered the intrusion or when senior CIA officials learned of it. Following the briefing, some senators struggled to maintain their composure over what they saw as a violation of the constitutional separation of powers between an executive branch agency and its congressional overseers. “We’re the only people watching these organizations, and if we can’t rely on the information that we’re given as being accurate, then it makes a mockery of the entire oversight function,” said Sen. Angus King, an independent from Maine who caucuses with the Democrats. The findings confirmed charges by the committee chairwoman, Sen. Dianne Feinstein, D-Calif., that the CIA intruded into the database that by agreement was to be used by her staffers compiling the report on the harsh interrogation methods used by the agency on suspected terrorists held in secret overseas prisons under the George W. Bush administration. The findings also contradicted Brennan’s denials of Feinstein’s allegations, prompting two panel members, Sens. Mark Udall, D-Colo., and Martin Heinrich, D-N.M., to demand that the spy chief resign.
  • Another committee member, Sen. Ron Wyden, D-Ore., and some civil rights groups called for a fuller investigation. The demands clashed with a desire by President Barack Obama, other lawmakers and the CIA to move beyond the controversy over the “enhanced interrogation program” after Feinstein releases her committee’s report, which could come as soon as next week. Many members demanded that Brennan explain his earlier denial that the CIA had accessed the Senate committee database. “Director Brennan should make a very public explanation and correction of what he said,” said Sen. Carl Levin, D-Mich. He all but accused the Justice Department of a coverup by deciding not to pursue a criminal investigation into the CIA’s intrusion.
  • “I thought there might have been information that was produced after the department reached their conclusion,” he said. “What I understand, they have all of the information which the IG has.” He hinted that the scandal goes further than the individuals cited in Buckley’s report. “I think it’s very clear that CIA people knew exactly what they were doing and either knew or should’ve known,” said Levin, adding that he thought that Buckley’s findings should be referred to the Justice Department. A person with knowledge of the issue insisted that the CIA personnel who improperly accessed the database “acted in good faith,” believing that they were empowered to do so because they believed there had been a security violation. “There was no malicious intent. They acted in good faith believing they had the legal standing to do so,” said the knowledgeable person, who asked not to be further identified because they weren’t authorized to discuss the issue publicly. “But it did not conform with the legal agreement reached with the Senate committee.”
  • Feinstein called Brennan’s apology and his decision to submit Buckley’s findings to the accountability board “positive first steps.” “This IG report corrects the record and it is my understanding that a declassified report will be made available to the public shortly,” she said in a statement. “The investigation confirmed what I said on the Senate floor in March -- CIA personnel inappropriately searched Senate Intelligence Committee computers in violation of an agreement we had reached, and I believe in violation of the constitutional separation of powers,” she said. It was not clear why Feinstein didn’t repeat her charges from March that the agency also may have broken the law and had sought to “thwart” her investigation into the CIA’s use of waterboarding (which simulates drowning), sleep deprivation and other harsh interrogation methods -- tactics denounced by many experts as torture.
  • Buckley’s findings clashed with denials that Brennan issued only hours after Feinstein’s blistering Senate speech. “As far as the allegations of, you know, CIA hacking into, you know, Senate computers, nothing could be further from the truth. I mean, we wouldn’t do that. I mean, that’s -- that’s just beyond the -- you know, the scope of reason in terms of what we would do,” he said in an appearance at the Council on Foreign Relations. White House Press Secretary Josh Earnest issued a strong defense of Brennan, crediting him with playing an “instrumental role” in the administration’s fight against terrorism, in launching Buckley’s investigation and in looking for ways to prevent such occurrences in the future. Earnest was asked at a news briefing whether there was a credibility issue for Brennan, given his forceful denial in March. “Not at all,” he replied, adding that Brennan had suggested the inspector general’s investigation in the first place. And, he added, Brennan had taken the further step of appointing the accountability board to review the situation and the conduct of those accused of acting improperly to “ensure that they are properly held accountable for that conduct.”
  • The allegations and the separate CIA charge that the committee staff removed classified documents from the secret CIA facility in Northern Virginia without authorization were referred to the Justice Department for investigation. The department earlier this month announced that it had found insufficient evidence on which to proceed with criminal probes into either matter “at this time.” Thursday, Justice Department officials declined comment.
  • In her speech, Feinstein asserted that her staff found the material -- known as the Panetta review, after former CIA Director Leon Panetta, who ordered it -- in the protected database and that the CIA discovered the staff had it by monitoring its computers in violation of the user agreement. The inspector general’s summary, which was prepared for the Senate and the House intelligence committees, didn’t identify the CIA personnel who had accessed the Senate’s protected database. Furthermore, it said, the CIA crimes report to the Justice Department alleging that panel staffers had removed classified materials without permission was grounded on inaccurate information. The report is believed to have been sent by the CIA’s then-acting general counsel, Robert Eatinger, who was a legal adviser to the interrogation program. “The factual basis for the referral was not supported, as the author of the referral had been provided inaccurate information on which the letter was based,” said the summary, noting that the Justice Department decided not to pursue the issue.
  • Christopher Anders, senior legislative counsel with the American Civil Liberties Union, criticized the CIA announcement, saying that “an apology isn’t enough.” “The Justice Department must refer the (CIA) inspector general’s report to a federal prosecutor for a full investigation into any crimes by CIA personnel or contractors,” said Anders.
  •  
    And no one but the lowest-ranking staffer knew anything about it, not even the CIA lawyer who made the criminal referral to the Justice Dept., alleging that the Senate Intelligence Committee had accessed classified documents it wasn't authorized to access. So the Justice Dept. announces that there's insufficient evidence to warrant a criminal investigation. As though the CIA lawyer's allegations were not based on the unlawful surveillance of the Senate Intelligence Committee's network. Can't we just get an official announcement that Attorney General Holder has decided that there shall be a cover-up?