
Future of the Web / Group items tagged: default


Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
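As a minimal sketch of that kind of class-based extensibility (the class names "epigraph" and "chapter-summary" below are hypothetical illustrations, not drawn from any published microformat), any XML-aware tool can pick out semantically tagged chunks from otherwise plain XHTML:

```python
# A minimal sketch of class-based "semantic" tagging in plain XHTML.
# The class names (epigraph, chapter-summary) are hypothetical examples,
# not part of any published microformat or house standard.
from lxml import etree

xhtml = """
<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">Everything should be made as simple as possible.</p>
  <p>Ordinary body text continues here.</p>
  <p class="chapter-summary">This chapter argues that XHTML is enough.</p>
</div>
"""

tree = etree.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Select the "semantic" chunks by class, exactly as a downstream tool
# (indexer, ebook builder, print transform) might do.
for p in tree.xpath('//x:p[@class="epigraph" or @class="chapter-summary"]',
                    namespaces=ns):
    print(p.get("class"), "->", p.text.strip())
```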
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
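Because the wrapper is an ordinary zip archive, this anatomy is easy to verify with nothing but a scripting language's standard library. A quick sketch, assuming a file named book.epub sits in the working directory (the same trick works on the IDML files discussed below):

```python
# Peek inside an ePub: it is an ordinary zip archive whose payload is XHTML.
# Assumes a file named book.epub exists in the working directory.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)   # e.g. mimetype, META-INF/container.xml, *.xhtml files

    # The mimetype entry identifies the package as an ePub.
    print(epub.read("mimetype").decode("ascii"))   # application/epub+zip
```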
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which produces a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
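That clean-up pass can be scripted rather than done entirely by hand. A rough sketch of the idea follows; the class name "list-item" is only an illustration, since the classes InDesign actually emits depend on the document's paragraph style names:

```python
# Sketch: turn presentation-oriented "list paragraphs" into real XHTML lists.
# The class name "list-item" is illustrative; InDesign derives class names
# from the document's paragraph styles, so yours will differ.
import re

xhtml = """
<p class="list-item">First point</p>
<p class="list-item">Second point</p>
<p>Ordinary paragraph.</p>
"""

# Step 1: each tagged paragraph becomes a list item.
xhtml = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', xhtml, flags=re.S)

# Step 2: wrap each consecutive run of list items in a single <ul>.
xhtml = re.sub(r'(?:<li>.*?</li>\s*)+',
               lambda m: "<ul>\n" + m.group(0) + "</ul>\n",
               xhtml, flags=re.S)

print(xhtml)
```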
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall family of XML specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
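A drastically simplified sketch of this transformation step is shown below, using Python's lxml bindings in place of xsltproc (the command-line equivalent would be along the lines of "xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml"). The output is only meant to suggest the shape of the XHTML-to-ICML mapping; a real ICML file needs considerably more wrapper structure, and the style name "Body" is a placeholder for whatever the InDesign template actually defines:

```python
# Sketch only: map XHTML paragraphs onto ICML-style ParagraphStyleRange
# elements. Real ICML needs more wrapper structure than shown here, and
# the applied style name "Body" is a placeholder for the template's styles.
from lxml import etree

XSL = b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <Story>
      <xsl:apply-templates select="//x:p"/>
    </Story>
  </xsl:template>
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
"""

transform = etree.XSLT(etree.fromstring(XSL))

chapter = etree.fromstring(
    b'<html xmlns="http://www.w3.org/1999/xhtml"><body>'
    b'<p>First paragraph.</p><p>Second paragraph.</p></body></html>'
)

print(str(transform(chapter)))
```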
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
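For readers who want to see what "wrapping" amounts to, here is a bare-bones sketch that builds an EPUB 2-style package around a single XHTML chapter. The file names, title, and identifier are invented for illustration, and a production file would also carry an NCX table of contents, which tools like eCub generate automatically:

```python
# Sketch: wrap a single XHTML chapter in a minimal ePub-style package.
# File names, title and identifier are made up; a production ePub would
# also carry an NCX table of contents and fuller per-book metadata.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">example-book-id</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
    <item id="css" href="book.css" media-type="text/css"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER = """<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title>
    <link rel="stylesheet" type="text/css" href="book.css"/></head>
  <body><h1>Chapter 1</h1><p>Clean XHTML content goes here.</p></body>
</html>"""

CSS = "h1 { margin-top: 2em; } p { text-indent: 1.5em; margin: 0; }"

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/content.opf", OPF, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/chapter1.xhtml", CHAPTER, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/book.css", CSS, compress_type=zipfile.ZIP_DEFLATED)
```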
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, is can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

XKeyscore Exposé Reaffirms the Need to Rid the Web of Tracking Cookies | Elec... - 0 views

  • The Intercept published an expose on the NSA's XKeyscore program. Along with information on the breadth and scale of the NSA's metadata collection, The Intercept revealed how the NSA relies on unencrypted cookie data to identify users. As The Intercept says: "The NSA’s ability to piggyback off of private companies’ tracking of their own users is a vital instrument that allows the agency to trace the data it collects to individual users. It makes no difference if visitors switch to public Wi-Fi networks or connect to VPNs to change their IP addresses: the tracking cookie will follow them around as long as they are using the same web browser and fail to clear their cookies." The NSA slides released by The Intercept give detailed guides to understanding the data transmitted by these cookies, as well as how to find unique machine identifiers that analysts can use to differentiate between multiple machines using the same IP address. We've written before about how spy agencies piggyback on social media account data to find Internet users' names or other identifying info, and these slides drive home the point that HTTP cookies leave users vulnerable to government surveillance, since any intermediary (or spy agency) can read the sensitive data they contain.
  • Worse yet, most of the time these identifying cookies come from third-party sources on webpages, and users have no meaningful way to opt out of receiving them (short of blocking all third party cookies) since advertisers (the main server of these types of cookies) refuse to honor the Do Not Track header.  Browser makers could help address this sort of non-consensual tracking by both advertisers and the NSA with some simple technical changes—changes that have been shown to reduce the number of third party cookies received by 67%. So far, though, they've been unwilling to build privacy protecting features in by default. Until they do, the best way for users to protect themselves is by installing a privacy protecting app like Privacy Badger, which is designed to block these types of uniquely identifying tracking cookies, or HTTPS Everywhere to block the transmission of HTTP cookies.
Gonzalo San Gil, PhD.

Zero Day Malware Detection/Prevention Using Open Source Software - 0 views

  •  
    "Zero Day Malware Detection/Prevention Using Open Source Software - Proof of Concept Fathi "
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • ...9 more annotations...
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point Software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article:
    Kaspersky User-Agent Strings — NSA
    Project CAMBERDADA — NSA
    NDIST — GCHQ’s Developing Cyber Defence Mission
    GCHQ Application for Renewal of Warrant GPW/1160
    Software Reverse Engineering — GCHQ
    Reverse Engineering — GCHQ Wiki
    Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Paul Merrell

Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth - 0 views

  • Yesterday, news broke that Google has been stealth downloading audio listeners onto every computer that runs Chrome, and transmits audio data back to Google. Effectively, this means that Google had taken upon itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on. In official statements, Google shrugged off the practice with what amounts to “we can do that”. It looked like just another bug report. "When I start Chromium, it downloads something." Followed by strange status information that notably included the lines "Microphone: Yes" and "Audio Capture Allowed: Yes".
  • Without consent, Google’s code had downloaded a black box of code that – according to itself – had turned on the microphone and was actively listening to your room. A brief explanation of the Open-source / Free-software philosophy is needed here. When you’re installing a version of GNU/Linux like Debian or Ubuntu onto a fresh computer, thousands of really smart people have analyzed every line of human-readable source code before that operating system was built into computer-executable binary code, to make it common and open knowledge what the machine actually does instead of trusting corporate statements on what it’s supposed to be doing. Therefore, you don’t install black boxes onto a Debian or Ubuntu system; you use software repositories that have gone through this source-code audit-then-build process. Maintainers of operating systems like Debian and Ubuntu use many so-called “upstreams” of source code to build the final product. Chromium, the open-source version of Google Chrome, had abused its position as trusted upstream to insert lines of source code that bypassed this audit-then-build process, and which downloaded and installed a black box of unverifiable executable code directly onto computers, essentially rendering them compromised. We don’t know and can’t know what this black box does. But we see reports that the microphone has been activated, and that Chromium considers audio capture permitted.
  • This was supposedly to enable the “Ok, Google” behavior – that when you say certain words, a search function is activated. Certainly a useful feature. Certainly something that enables eavesdropping of every conversation in the entire room, too. Obviously, your own computer isn’t the one to analyze the actual search command. Google’s servers do. Which means that your computer had been stealth configured to send what was being said in your room to somebody else, to a private company in another country, without your consent or knowledge, an audio transmission triggered by… an unknown and unverifiable set of conditions. Google had two responses to this. The first was to introduce a practically-undocumented switch to opt out of this behavior, which is not a fix: the default install will still wiretap your room without your consent, unless you opt out, and more importantly, know that you need to opt out, which is nowhere a reasonable requirement. But the second was more of an official statement following technical discussions on Hacker News and other places. That official statement amounted to three parts (paraphrased, of course):
  • ...4 more annotations...
  • 1) Yes, we’re downloading and installing a wiretapping black-box to your computer. But we’re not actually activating it. We did take advantage of our position as trusted upstream to stealth-insert code into open-source software that installed this black box onto millions of computers, but we would never abuse the same trust in the same way to insert code that activates the eavesdropping-blackbox we already downloaded and installed onto your computer without your consent or knowledge. You can look at the code as it looks right now to see that the code doesn’t do this right now.
    2) Yes, Chromium is bypassing the entire source code auditing process by downloading a pre-built black box onto people’s computers. But that’s not something we care about, really. We’re concerned with building Google Chrome, the product from Google. As part of that, we provide the source code for others to package if they like. Anybody who uses our code for their own purpose takes responsibility for it. When this happens in a Debian installation, it is not Google Chrome’s behavior, this is Debian Chromium’s behavior. It’s Debian’s responsibility entirely.
    3) Yes, we deliberately hid this listening module from the users, but that’s because we consider this behavior to be part of the basic Google Chrome experience. We don’t want to show all modules that we install ourselves.
  • If you think this is an excusable and responsible statement, raise your hand now. Now, it should be noted that this was Chromium, the open-source version of Chrome. If somebody downloads the Google product Google Chrome, as in the prepackaged binary, you don’t even get a theoretical choice. You’re already downloading a black box from a vendor. In Google Chrome, this is all included from the start. This episode highlights the need for hard, not soft, switches to all devices – webcams, microphones – that can be used for surveillance. A software on/off switch for a webcam is no longer enough, a hard shield in front of the lens is required. A software on/off switch for a microphone is no longer enough, a physical switch that breaks its electrical connection is required. That’s how you defend against this in depth.
  • Of course, people were quick to downplay the alarm. “It only listens when you say ‘Ok, Google’.” (Ok, so how does it know to start listening just before I’m about to say ‘Ok, Google?’) “It’s no big deal.” (A company stealth installs an audio listener that listens to every room in the world it can, and transmits audio data to the mothership when it encounters an unknown, possibly individually tailored, list of keywords – and it’s no big deal!?) “You can opt out. It’s in the Terms of Service.” (No. Just no. This is not something that is the slightest amount of permissible just because it’s hidden in legalese.) “It’s opt-in. It won’t really listen unless you check that box.” (Perhaps. We don’t know, Google just downloaded a black box onto my computer. And it may not be the same black box as was downloaded onto yours.) Early last decade, privacy activists practically yelled and screamed that the NSA’s taps of various points of the Internet and telecom networks had the technical potential for enormous abuse against privacy. Everybody else dismissed those points as basically tinfoilhattery – until the Snowden files came out, and it was revealed that precisely everybody involved had abused their technical capability for invasion of privacy as far as was possible. Perhaps it would be wise to not repeat that exact mistake. Nobody, and I really mean nobody, is to be trusted with a technical capability to listen to every room in the world, with listening profiles customizable at the identified-individual level, on the mere basis of “trust us”.
  • Privacy remains your own responsibility.
  •  
    And of course, Google would never succumb to a subpoena requiring it to turn over the audio stream to the NSA. The Tor Browser just keeps looking better and better. https://www.torproject.org/projects/torbrowser.html.en
Gary Edwards

Microsoft Office Sharepoint Server: a next generation of deeper, wider content silos? |... - 0 views

  • Some of the next generation collaboration platforms are succeeding precisely because they are silo bunker busters. The problem of dozens of digital filing cabinets full of thousands of iterations of hard to find documents within enterprise environments is arguably being solved by the new generation of nimble project contextual tools. Taxonomies and tagging, threaded discussion, wikis and other ‘Enterprise 2.0‘ tools are an alternative solution to the problem of generating mountains of hard to find silo’d information and associated email. Microsoft have a fabulously lucrative franchise with their Office suite of Word, PowerPoint, Excel et al desktop products. A huge issue in the enterprise space is blizzards of email containing links to documents created with these products on shared drives, or iterations actually attached to the mail messages. Add mobile users on laptops with intermittent connection and you also have serious synch headaches.
  •  
    Discussion about the impact SharePoint is having: based on Boston 2008 Enterprise 2.0 Conference
Paul Merrell

W3C releases Working Draft for Widgets 1.0: APIs and Events - 0 views

  • This specification defines a set of APIs and events for the Widgets 1.0 Family of Specifications that enable baseline functionality for widgets. The APIs and events defined by this specification provide, amongst other things, the means to: access the metadata declared in a widget's configuration document, receive events related to changes in the view state of a widget, determine the locale under which a widget is currently running, be notified of events relating to the widget being updated, invoke a widget to open a URL in the system's default browser, request the user's attention in a device-independent manner, and check if any additional APIs requested via the configuration document's feature element have successfully loaded.
  • This specification defines a set of APIs and events for widgets that enable baseline functionality for widgets. Widgets are full-fledged client-side applications that are authored using Web standards. They are typically downloaded and installed on a client machine or device where they typically run as stand-alone applications outside of a Web browser. Examples range from simple clocks, stock tickers, news casters, games and weather forecasters, to complex applications that pull data from multiple sources to be "mashed-up" and presented to a user in some interesting and useful way
  • This specification is part of the Widgets 1.0 family of specifications, which together standardize widgets as a whole. The Widgets 1.0: Packaging and Configuration [Widgets-Packaging] standardizes a Zip-based packaging format, an XML-based configuration document format and a series of steps that user agents follow when processing and verifying various aspects of widgets. The Widgets 1.0: Digital Signature [Widgets-DigSig] specification defines a means for widgets to be digitally signed using a custom profile of the XML-Signature Syntax and Processing Specification. The Widgets 1.0: Automatic Updates [Widgets-Updates] specification defines a version control model that allows widgets to be kept up-to-date over [HTTP].
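For concreteness, the packaging side of the family centres on a configuration document (config.xml) inside the widget's zip package. The sketch below shows roughly what such a document looks like and how a processor might read its metadata; the element and namespace names follow the Packaging and Configuration draft as recalled here, so check the current specification before relying on them:

```python
# Sketch: read basic metadata from a Widgets 1.0 configuration document.
# The element and namespace names follow the W3C Packaging and Configuration
# draft as recalled here; verify against the current specification.
import xml.etree.ElementTree as ET

CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/clock"
        version="1.0">
  <name>Example Clock</name>
  <content src="index.html"/>
  <feature name="http://example.org/api/geolocation" required="false"/>
</widget>"""

NS = "{http://www.w3.org/ns/widgets}"
root = ET.fromstring(CONFIG)

print("name:   ", root.findtext(NS + "name"))
print("version:", root.get("version"))
print("start:  ", root.find(NS + "content").get("src"))
for feature in root.findall(NS + "feature"):
    print("feature:", feature.get("name"), "required =", feature.get("required"))
```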
Paul Merrell

Introducing the Open XML Format External File Converter for 2007 Microsoft Office Syste... - 0 views

  • In other words, revising the Open XML Format converter interfaces by adding new functionality does not require any recompilation of existing clients. This guarantees backward compatibility as these converter interfaces are upgraded.
    • Paul Merrell
       
      But what does it do for forward compatibility? OOXML is a moving interoperability target.
  • In addition to allowing converters to override external file formats, the applications allow converters to override OpenDocument Format-related formats (such as .odt). For example, if you specify a converter to be the default converter for .odt, Word 2007 SP2 invokes the specified converter whenever a user tries to open an .odt file from the Windows Shell instead of going through the native load path for Word 2007 SP2.
    • Paul Merrell
       
      How wonderful. Developers can bypass the forthcoming Microsoft native file support for ODF. Perhaps to convert Excel formulas to OpenFormula?
  • Open XML Format converters for Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 are implemented as out-of-process COM servers. Out-of-process converters have the benefit of running in their own process space, which means issues or crashes within converters do not affect the application process space. In addition, out-of-process 32-bit converters can function on 64-bit operating systems in Microsoft Windows on Windows 64-bit (WoW64) mode without the need for converters to be compiled in 64-bit.
    • Paul Merrell
       
      Pretty lame excuses for not documenting the native file support APIs. I.e., the native file support APIs already throw "can't open file" error messages for problematic documents without crashing the app. The bit about not needing to recompile converters for 64-bit Windoze is a complete red herring. This is only a benefit if one requires conversion in an external process. It wouldn't be an issue if the native file support APIs were documented and their intermediate formats were the interop targets.
    • Paul Merrell
       
      I.e., one need not recompile the Office app if a supported native format is added. The OpenDocument Foundation and Sun plug-ins for MS Office proved that.
  • ...3 more annotations...
  • To begin developing a converter, you should familiarize yourself with the Open XML standard. For more information, see: Standard ECMA-376: Office Open XML File Formats.
    • Paul Merrell
       
      Note that they specify Ecma 376 rather than ISO/IEC:29500-2008 Office Open XML. So you get to rewrite your converters when Microsoft adds support for the official standard in the next major release of Office.
  • External files are imported into Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 by converting the external file to Open XML Formats. External files are exported from Word 2007 SP2, Excel 2007 SP2, or PowerPoint by converting Open XML Formats to external files. The success of either the import or export conversion depends upon the accurate generation and interpretation of Open XML Formats by the converter.
    • Paul Merrell
       
      Note that this is a process external to the native file support APIs and their intermediate formats. The real APIs apparently will remain obfuscated. This forces others to develop support for Ecma 376 rather than working directly with the native file support APIs. In other words, more incentives for others to target the moving target OOXML rather than the more stable intermediate formats.
  • Summary: Get the details about the interfaces that you need to use to create an Open XML Format External File Converter for the 2007 Microsoft Office system Service Pack 2 (SP2). (16 Printed Pages)
Paul Merrell

IDABC - Revision of the EIF and AG - 0 views

  • In 2006, the European Commission started the revision of the European Interoperability Framework (EIF) and the Architecture Guidelines (AG).
  • The European Commission has started drafting the EIF v2.0 in close cooperation with the concerned Commission services and with the Member States, as well as with the Candidate Countries and EEA Countries as observers.
  • A draft document from which the final EIF v2.0 will be elaborated was available for external comments until 22 September. The proposal for the new EIF v2.0 that has been subject to consultation is available as a download (3,508 KB).
  •  
    This planning document forms the basis for the forthcoming work to develop European Interoperability Framework v. 2.0. It is the overview of things to come, so to speak. Well worth the read to see how SOA concepts are evolving at the bleeding edge. But also noteworthy for the faceted expansion in the definition of "interoperability," which now includes: [i] political context; [ii] legal interop; [iii] organizational interop; [iv] semantic interop; and [v] technical interop. A lot of people talk the interop talk; this is a document from people who are walking the interop walk, striving to bring order out of the chaos of incompatible ICT systems across the E.U.
  •  
    Full disclosure: I submitted detailed comments on the draft of the subject document on behalf of the Universal Interoperability Council. One theme of my comments was embraced in this document: the document recognizes human-machine interactions as a facet of interoperability, moving accessibility and usability from sideshow treatment in the draft to part of the technical interop dimension of the plan.
Paul Merrell

Intel Could Face Civil Charges in Europe - PC World - 0 views

  • But Intel could face even more payouts if Intel competitors, such as AMD, take civil cases on the back of the Commission's regulatory action, according to Alan Davis, an expert in competition law at Pinsent Masons, the law firm behind OUT-LAW.COM. "This will open the floodgates for competitors to sue," said Davis. "There was a complainant in this case, AMD [Advanced Micro Devices], and without question they and other competitors will pursue a case for damages." "The fine goes to the European Commission's coffers, not to the competitors who suffered damage to their businesses because of Intel's anti-competitive practices," he said. "What is likely to happen is that action will be started and a massive settlement will be made."
Paul Merrell

Microsoft offers free repository for agency data -- Government Computer News - 0 views

  • Microsoft has set up a repository in which government agencies may upload and store their public-facing datasets so that they can be reused by other parties. Agency developers can upload their data to this repository, called the Open Government Data Initiative (OGDI), through Microsoft's Azure, the company's cloud-computing offering.
  • Since taking the role of federal chief information officer, Vivek Kundra has urged agencies to make more of their data open to the public in easy-to-use formats. To this end, the General Services Administration, on behalf of Kundra, is setting up a repository of government feeds, to be called Data.gov. Data.gov will both serve as a repository for data and as an index for government data located elsewhere, Kundra told GCN. OGDI came about as a way to introduce Azure to the federal information technology community, said Susie Adams, Microsoft Federal chief technology officer. "The government wants to store all this data, what with Kundra talking about Data.gov. We asked if you were to use Azure as data source, [what would you need to do]?"
  • In addition to Microsoft's effort, at least one other company has volunteered to rehost government data for wider use. Amazon is offering to store public-domain datasets for users of its Elastic Compute Cloud service.
Paul Merrell

Open Government Data Initiative - 0 views

  • The Open Government Data Initiative (OGDI) is an initiative led by Microsoft's Public Sector Developer Evangelism team. OGDI uses the Azure Services Platform to make it easier to publish and use a wide variety of public data from government agencies. OGDI is also a free, open-source ‘starter kit’ (coming soon) with code that can be used to publish data on the Internet in a Web-friendly format with easy-to-use, open APIs. OGDI-based web APIs can be accessed from a variety of client technologies such as Silverlight, Flash, JavaScript, PHP, Python, Ruby, mapping web sites, etc. Whether you are a business wishing to use government data, a government developer, or a ‘citizen developer’, these open APIs will enable you to build innovative applications, visualizations and mash-ups that empower people through access to government information. This site is built using the OGDI starter kit software assets and provides interactive access to some publicly-available data sets along with sample code and resources for writing applications using the OGDI APIs.
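As an illustration of what "easy-to-use, open APIs" means in practice, a short sketch of consuming such a REST endpoint follows. The host name and dataset path are entirely hypothetical placeholders, not a real OGDI service:

```python
# Hypothetical sketch: consume a REST/JSON endpoint of the kind OGDI exposes.
# The host name and dataset path below are placeholders, not a real service.
import json
import urllib.request

url = "https://ogdi.example.org/v1/dc/CrimeIncidents?format=json"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Show the first part of whatever payload the service returns.
print(json.dumps(data, indent=2)[:500])
```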
Paul Merrell

'Hostile to privacy': Snowden urges internet users to get rid of Dropbox - R... - 0 views

  • Edward Snowden has hit out at Dropbox and other services he says are “hostile to privacy,” urging web users to abandon unencrypted communication and adjust privacy settings to prevent governments from spying on them in increasingly intrusive ways. “We are no longer citizens, we no longer have leaders. We’re subjects, and we have rulers,” Snowden told The New Yorker magazine in a comprehensive hour-long interview. There isn’t enough investment into security research, into understanding how metadata could better be protected and why that is more necessary today than yesterday, he said.
  • The whistleblower believes one fallacy in how authorities view individual rights has to do with making the individual forsake those rights by default. Snowden’s point is that the moment you are compelled to reveal that you have nothing to hide is when the right to privacy stops being a right – because you are effectively waiving that right. “When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights – you don’t have to justify why you need freedom of speech.” In that situation, it becomes OK to live in a world where one is no longer interested in privacy as such – a world where Facebook, Google and Dropbox have become ubiquitous, and where there are virtually no safeguards against the wrongful use of the information one puts there.
  • ...1 more annotation...
  • In particular, Snowden advised web users to "get rid" of Dropbox. Such services encrypt user data only during transfer and while it is stored on their servers. Other services he recommends instead, such as SpiderOak, encrypt information while it's on your computer as well. "We're talking about dropping programs that are hostile to privacy," Snowden said. The same goes for social networks such as Facebook and Google. Snowden says they are "dangerous" and proposes that people use other services that allow for encrypted messages to be sent, such as RedPhone or SilentCircle.
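
The distinction Snowden draws is between providers that encrypt data only in transit and at rest on their own servers (where the provider holds the keys) and client-side encryption, where data is encrypted before it ever leaves the user's machine. Below is a minimal sketch of the client-side approach, assuming Python's third-party cryptography package and placeholder file names; it illustrates the idea, not how SpiderOak or any particular service is actually implemented.

```python
# Sketch of client-side ("zero-knowledge") encryption: the file is encrypted
# locally before it is handed to any storage provider, so the provider only
# ever sees ciphertext. Uses the third-party "cryptography" package; the file
# names are placeholders.
from cryptography.fernet import Fernet


def encrypt_file(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt plain_path with key and write the ciphertext to cipher_path."""
    fernet = Fernet(key)
    with open(plain_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(cipher_path, "wb") as dst:
        dst.write(ciphertext)


if __name__ == "__main__":
    key = Fernet.generate_key()  # stays on this machine; never sent to the provider
    encrypt_file("notes.txt", "notes.txt.enc", key)
    # Only "notes.txt.enc" would be uploaded or synced; without the key,
    # the storage provider cannot read it.
```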
Paul Merrell

Protect your synced data - Chrome Help - 0 views

  • When you sign in to Chrome and enable sync, Chrome keeps your information secure by using your Google Account credentials to encrypt your synced passwords. Alternatively, you can choose to encrypt all of your synced data with a sync passphrase. This sync passphrase is stored on your computer and isn't sent to Google.
  • To choose how synced data is encrypted:
    1. Click the Chrome menu on the browser toolbar.
    2. Select Signed in as <your email address> (you must be signed in to Chrome already).
    3. In the "Sign in" section, click Advanced sync settings.
    4. Choose an encryption option:
       - Encrypt synced passwords with your Google credentials: This is the default option. Your saved passwords are encrypted on Google's servers and protected with your Google Account credentials.
       - Encrypt all synced data with your own sync passphrase: Select this if you'd like to encrypt all the data you've chosen to sync. You can provide your own passphrase that will only be stored on your computer.
    5. Click OK.
  •  
    Just installed Google Chrome on a new system. When I went into settings to set my synchronization preferences, I discovered a new synchronization setting I had never noticed before. I suspect it's new, and one of Google's reactions to the NSA scandal: end-to-end encryption with a local password that isn't sent to Google. If you're using Chrome, here's an easy way to help the Web fight back against NSA voyeurs.
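
The passphrase option described above amounts to deriving the encryption key from a secret that never leaves the device, so the sync server only ever stores ciphertext. The following is a minimal sketch of that idea, assuming Python's third-party cryptography package; it illustrates the general passphrase-derived-key pattern, not Chrome's actual sync scheme.

```python
# Sketch of the passphrase idea: derive an encryption key from a passphrase
# with PBKDF2 and encrypt data locally. Only the ciphertext (plus the random
# salt) would ever reach the sync server; the passphrase and derived key stay
# on the device. This illustrates the pattern, not Chrome's actual scheme.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a user passphrase into a 32-byte urlsafe-base64 Fernet key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode("utf-8")))


if __name__ == "__main__":
    salt = os.urandom(16)
    key = derive_key("correct horse battery staple", salt)
    ciphertext = Fernet(key).encrypt(b"example synced secret")
    print(ciphertext)  # this, plus the salt, is all the server ever needs to store
```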
Paul Merrell

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Inter... - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders "within a fixed time unless the government demonstrates a real need for further secrecy." And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its "comprehensive effort to examine and enhance [its] privacy and civil liberty protections," one of the most concrete was — finally — to cap the gag orders: In response to the President's new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation's close. Continued nondisclosure orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • ...6 more annotations...
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.
Paul Merrell

News from The Associated Press - 0 views

  • (AP) -- Federal regulators are urging consumers to go through their phone bills line by line after they accused T-Mobile US of wrongly charging customers for premium services, like horoscope texts and quirky ringtones, that the customers never authorized. The Federal Trade Commission announced Tuesday that it is suing T-Mobile in a federal court in Seattle with the goal of making sure every unfairly charged customer sees a full refund. The lawsuit, the first of its kind against a mobile provider, is the result of months of stalled negotiations with T-Mobile, which says it is already offering refunds. "It's wrong for a company like T-Mobile to profit from scams against its customers when there were clear warning signs the charges it was imposing were fraudulent," said FTC Chair Edith Ramirez in a statement.
  • The practice is called "cramming": A third party stuffs a customer's bill with bogus charges such as $10-per-month horoscopes or updates on celebrity gossip. In this case, the FTC said, T-Mobile was working with third-party vendors being investigated by regulators and known to be the subject of numerous customer complaints. T-Mobile then made it difficult for customers to notice the added charge to their bill and pocketed up to 40 percent of the total, according to the FTC.
  • The FTC told reporters in a conference call Tuesday that it had been in negotiations with T-Mobile for months in an attempt to guarantee refunds would be provided to customers but that the two sides couldn't reach an agreement. T-Mobile appears to have been laying the groundwork to head off the federal complaint. Last November, the company announced that it would no longer allow premium text services because they were waning in popularity and not all vendors had acted responsibly. In June, it announced it would reach out to consumers to provide refunds. But the FTC says that in many cases, the refunds are only partial and T-Mobile often refers customer complaints to the third-party vendors.
Paul Merrell

Guest Post: NSA Reform - The Consequences of Failure | Just Security - 0 views

  • In the absence of real reform, people and institutions at home and abroad are taking matters into their own hands. In America, the NSA’s overreach is changing the way we communicate with and relate to each other. In order to evade government surveillance, more and more Americans are employing encryption technology.  The veritable explosion of new secure messaging apps like Surespot, OpenWhisper’s collaboration with WhatsApp, the development and deployment of open source anti-surveillance tools like Detekt, the creation of organizationally-sponsored “surveillance self-defense” guides, the push to universalize the https protocol, anti-surveillance book events featuring free encryption workshops— are manifestations of the rise of the personal encryption and pro-privacy digital resistance movement. Its political implications are clear: Americans, along with people around the world, increasingly see the United States government’s overreaching surveillance activities as a threat to be blocked.
  • The federal government’s vacuum-cleaner approach to surveillance—manifested in Title II of the PATRIOT Act, the FISA Amendments Act, and EO 12333—has backfired in these respects, and the emergence of this digital resistance movement is one result. Indeed, the existence and proliferation of social networks hold the potential to help this movement spread faster and to more of the general public than would have been possible in decades past. This is evidenced by the growing concern worldwide about governments’ ability to access reams of information about people’s lives with relative ease. As one measure, compared to a year ago, 41% of online users in North America now avoid certain Internet sites and applications, 16% change who they communicate with, and 24% censor what they say online. Those numbers, if anywhere close to accurate, are a major concern for democratic society.
  • Even if commercially available privacy technology proves capable of providing a genuine shield against warrantless or otherwise illegal surveillance by the United States government, it will remain a treatment for the symptom, not a cure for the underlying legal and constitutional malady. In April 2014, a Harris poll of US adults showed that in response to the Snowden revelations, “Almost half of respondents (47%) said that they have changed their online behavior and think more carefully about where they go, what they say, and what they do online.” Set aside for a moment that just the federal government’s collection of the data of innocent Americans is itself likely a violation of the Fourth Amendment. The Harris poll is just one of numerous studies highlighting the collateral damage to American society and politics from NSA’s excesses: segments of our population are now fearful of even associating with individuals or organizations executive branch officials deem controversial or suspicious. Nearly half of Americans say they have changed their online behavior out of a fear of what the federal government might do with their personal information. The Constitution’s free association guarantee has been damaged by the Surveillance State’s very operation.
  • ...1 more annotation...
  • The failure of the Congress and the courts to end the surveillance state, despite the repeated efforts by a huge range of political and public interest actors to effect that change through the political process, is only fueling the growing resistance movement. Federal officials understand this, which is why they are trying—desperately and in the view of some, underhandedly—to shut down this digital resistance movement. This action/reaction cycle is exactly what it appears to be: an escalating conflict between the American public and its government. Without comprehensive surveillance authority reforms (including a journalist “shield law” and ironclad whistleblower protections for Intelligence Community contractors) that are verifiable and enforceable, that conflict will only continue.
Gonzalo San Gil, PhD.

The Linux desktop battle (and why it matters) - TechRepublic - 2 views

  •  
    Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution.
  •  
    "Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution. Linux desktop I have been using Ubuntu Unity for a very long time. In fact, I would say that this is, by far, the longest I've stuck with a single desktop interface. Period. That doesn't mean I don't stop to smell the desktop roses along the Linux path. In fact, I've often considered other desktops as a drop-in replacement for Unity. GNOME and Budgie have vied for my attention of late. Both are solid takes on the desktop that offer a minimalistic, modern look and feel (something I prefer) and help me get my work done with an efficiency other desktops can't match. What I see across the Linux landscape, however, often takes me by surprise. While Microsoft and Apple continue to push the idea of the user interface forward, a good amount of the Linux community seems bent on holding us in a perpetual state of "90s computing." Consider Xfce, Mate, and Cinnamon -- three very popular Linux desktop interfaces that work with one very common thread... not changing for the sake of change. Now, this can be considered a very admirable cause when it's put in place to ensure that user experience (UX) is as positive as possible. What this idea does, however, is deny the idea that change can affect an even more efficient and positive UX. When I spin up a distribution that makes use of Xfce, Mate, or Cinnamon, I find the environments work well and get the job done. At the same time, I feel as if the design of the desktops is trapped in the wrong era. At this point, you're certainly questioning the validity and path of this post. If the desktops work well and help you get the job done, what's wrong? It's all about perception. Let me offer you up a bit of perspective. The only reason Apple managed to rise from the ashes and become one of the single most powerful forces in technology is because they understood the concept of perception. They re-invented th
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • ...6 more annotations...
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through "improper action by a dishonest, negligent or overly zealous official" was quite high (para. 270). Accordingly, "the requirement to show an interception authorisation to the communications service provider before obtaining access to a person's communications is one of the important safeguards against abuse by the law-enforcement authorities" (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

Section 215 and "Fruitless" (?!?) Constitutional Adjudication | Just Security - 0 views

  • This morning, the Second Circuit issued a follow-on ruling to its May decision in ACLU v. Clapper (which had held that the NSA’s bulk telephone records program was unlawful insofar as it had not properly been authorized by Congress). In a nutshell, today’s ruling rejects the ACLU’s request for an injunction against the continued operation of the program for the duration of the 180-day transitional period (which ends on November 29) from the old program to the quite different collection regime authorized by the USA Freedom Act. As the Second Circuit (in my view, quite correctly) concluded, “Regardless of whether the bulk telephone metadata program was illegal prior to May, as we have held, and whether it would be illegal after November 29, as Congress has now explicitly provided, it is clear that Congress intended to authorize it during the transitionary period.” So far, so good. But remember that the ACLU’s challenge to bulk collection was mounted on both statutory and constitutional grounds, the latter of which the Second Circuit was able to avoid in its earlier ruling because of its conclusion that, prior to the enactment of the USA Freedom Act, bulk collection was unauthorized by Congress. Now that it has held that it is authorized during the transitional period, that therefore tees up, quite unavoidably, whether bulk collection violates the Fourth Amendment. But rather than decide that (momentous) question, the Second Circuit ducked:
  • We agree with the government that we ought not meddle with Congress’s considered decision regarding the transition away from bulk telephone metadata collection, and also find that addressing these issues at this time would not be a prudent use of judicial authority. We need not, and should not, decide such momentous constitutional issues based on a request for such narrow and temporary relief. To do so would take more time than the brief transition period remaining for the telephone metadata program, at which point, any ruling on the constitutionality of the demised program would be fruitless. In other words, because any constitutional violation is short-lived, and because it results from the “considered decision” of Congress, it would be fruitless to actually resolve the constitutionality of bulk collection during the transitional period.
  • Hopefully, it won't take a lot of convincing for folks to understand just how wrong-headed this is. For starters, if the plaintiffs are correct, they are currently being subjected to unconstitutional government surveillance for which they are entitled to a remedy. The fact that this surveillance has a limited shelf-life (and/or that Congress was complicit in it) doesn't in any way ameliorate the constitutional violation — which is exactly why the Supreme Court has, for generations, recognized an exception to mootness doctrine for constitutional violations that, owing to their short duration, are "capable of repetition, yet evading review." Indeed, in this very same opinion, the Second Circuit first held that the ACLU's challenge isn't moot, only to then invoke mootness-like principles to justify not resolving the constitutional claim. It can't be both; either the constitutional challenge is moot, or it isn't. But more generally, the notion that constitutional adjudication of a claim with a short shelf-life is "fruitless" utterly misses the significance of the establishment of forward-looking judicial precedent, especially in a day and age in which courts are allowed to (and routinely do) avoid resolving the merits of constitutional claims in cases in which the relevant precedent is not "clearly established." Maybe, if this were the kind of constitutional question that was unlikely to recur, there'd be more to the Second Circuit's avoidance of the issue in this case. But whether and to what extent the Fourth Amendment applies to information we voluntarily provide to third parties is hardly that kind of question, and the Second Circuit's unconvincing refusal to answer that question in a context in which it is quite squarely presented is nothing short of feckless.