Future of the Web: group items matching "guide,a" in title, tags, annotations, or URL

Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
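As a concrete, purely illustrative sketch of that kind of class-based extension, the snippet below uses Python's lxml (a third-party package) to pull out every paragraph tagged with a hypothetical "epigraph" class; the class name and the sample markup are invented for the example.

```python
# Sketch: querying class-based semantics in XHTML with lxml.
# The "epigraph" class is a hypothetical publisher convention, not a standard.
from lxml import html

doc = html.fromstring("""
<div>
  <p class="epigraph">Tell all the truth but tell it slant.</p>
  <p>Ordinary body paragraph.</p>
</div>
""")

# XPath that matches an element carrying a given class token.
epigraphs = doc.xpath(
    "//p[contains(concat(' ', normalize-space(@class), ' '), ' epigraph ')]"
)
for p in epigraphs:
    print(p.text_content())
```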
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
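A minimal sketch of that anatomy, using only Python's standard library; the file name is a placeholder, and apart from mimetype and META-INF/container.xml the internal paths vary from book to book.

```python
# Sketch: inspecting an ePub as the zip archive it is.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)                         # e.g. mimetype, META-INF/container.xml,
                                            # content.opf, chapter01.xhtml, styles.css
    print(epub.read("mimetype").decode())   # "application/epub+zip"
```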
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
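A sketch of the kind of search-and-replace clean-up described in the excerpt above. The class name the export attaches to list items ("bullet-item" here) is an assumption; the real value depends on the InDesign style names carried through the export.

```python
# Sketch of the clean-up step: paragraphs the export tags with a list-item class
# become real XHTML list items. The "bullet-item" class name is an assumption.
import re

def listify(xhtml: str) -> str:
    # Turn each tagged paragraph into a list item...
    xhtml = re.sub(
        r'<p class="bullet-item">(.*?)</p>',
        r'<li>\1</li>',
        xhtml,
        flags=re.DOTALL,
    )
    # ...then wrap each unbroken run of list items in a single <ul>.
    return re.sub(
        r'(?:<li>.*?</li>\s*)+',
        lambda m: '<ul>\n' + m.group(0) + '</ul>\n',
        xhtml,
        flags=re.DOTALL,
    )

print(listify('<p class="bullet-item">one</p><p class="bullet-item">two</p>'))
```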
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
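For the record, a transformation run of this sort can be driven with a one-line xsltproc call; the file names below are placeholders, and any XSLT 1.0 processor could stand in.

```python
# Sketch: driving the transformation with xsltproc, as the prototype did.
# File names are placeholders.
import subprocess

subprocess.run(
    ["xsltproc", "-o", "chapter01.icml", "xhtml-to-icml.xsl", "chapter01.xhtml"],
    check=True,
)
```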
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
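The real script is not reproduced in the article, but a toy fragment of the same idea, run through lxml's XSLT engine, looks roughly like this. The ICML element and style names are illustrative only; the actual names are dictated by the transformation script and the InDesign template.

```python
# Toy sketch of the XHTML -> ICML mapping idea, run with lxml's XSLT engine.
# Element names and the AppliedParagraphStyle value are illustrative assumptions.
from lxml import etree

XSL = b"""<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <Document>
      <xsl:apply-templates select="//p"/>
    </Document>
  </xsl:template>
  <xsl:template match="p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSL))
xhtml = etree.fromstring(b"<html><body><p>Hello, print.</p></body></html>")
print(str(transform(xhtml)))
```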
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
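To make the "wrapper" concrete, here is a hand-rolled sketch of the same packaging step using only Python's standard library. Tools like eCub automate this; the file names are placeholders, and the package metadata (OPF manifest, table of contents) that a valid ePub also needs is elided.

```python
# Sketch: wrapping existing XHTML in the ePub zip structure by hand.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

CHAPTER = """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body><p>Clean XHTML content goes here.</p></body>
</html>"""

with zipfile.ZipFile("book.epub", "w", zipfile.ZIP_DEFLATED) as epub:
    # The mimetype entry must come first in the archive and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)
    # content.opf (package metadata, manifest, spine) and a navigation document
    # are also required by the spec; they are elided from this sketch.
    epub.writestr("chapter01.xhtml", CHAPTER)
```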
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Organize a Giving Guide Giveaway - Free Software Foundation - December 1, 2014 - 0 views

  •  
    "by Free Software Foundation - Published on Nov 17, 2014 04:18 PM Organize an event to help people choose electronics gifts that actually give more than they take. In the flurry of holiday advertising that happens at the end of the year, many people are swept into buying freedom-denying and DRM-laden gifts that take more than they give. Each holiday season the FSF releases a Giving Guide to make it easy for you to choose tech gifts that respect your rights as a computer user and avoid those that don't. We'll be launching 2014's guide on Black Friday (November 28th), full of gifts that are fun and free, made by companies that share your values. It will be similar to 2013's Giving Guide, but more extensive and spruced up with a new design. It'll even have discounts on some of our favorite items, and translations into multiple languages."
Gonzalo San Gil, PhD.

Guide to DRM-Free Living | Defective by Design - 0 views

  •  
    "Welcome to the guide to living DRM-free. Please submit corrections and new items for the guide by adding it to the LibrePlanet wiki (you will need to register and login first) or emailing us at info@defectivebydesign.org. If you are involved with a DRM-free media project, we encourage you to use the DRM-free logo and link to this site. This guide lists any suppliers of digital media provide files free of DRM and do not require the use of proprietary software. Suppliers that have some DRM-free media or DRM-free options will be accepted if they differentiate between files which are DRM-free and those that are not. Certain suppliers may promote non-free software, but we will include warnings and instructions on how to avoid the software. Y"
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • ...9 more annotations...
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article: Kaspersky User-Agent Strings — NSA; Project CAMBERDADA — NSA; NDIST — GCHQ’s Developing Cyber Defence Mission; GCHQ Application for Renewal of Warrant GPW/1160; Software Reverse Engineering — GCHQ; Reverse Engineering — GCHQ Wiki; Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Gonzalo San Gil, PhD.

Guide: Anonymity and Privacy for Advanced Linux Users - Deep Dot Web - 1 views

  •  
    "All Credits go to beac0n, thanks for contacting us and contributing the guide you created! As people requested - here is a link to download this guide as a PDF. Intro The goal is to bring together enough information in one document for a beginner to get started. Visiting countless sites, and combing the internet for information can make it obvious your desire to obtain anonymity, and lead to errors, due to conflicting information. Every effort has been made to make this document accurate. This guide is image heavy so it may take some time to load via Tor."
Paul Merrell

Help:CirrusSearch - MediaWiki - 0 views

  • CirrusSearch is a new search engine for MediaWiki. The Wikimedia Foundation is migrating to CirrusSearch since it features key improvements over the previously used search engine, LuceneSearch. This page describes the features that are new or different compared to the past solutions.
  • Contents: 1 Frequently asked questions; 1.1 What's improved?; 2 Updates; 3 Search suggestions; 4 Full text search; 4.1 Stemming; 4.2 Filters (intitle:, incategory: and linksto:); 4.3 prefix:; 4.4 Special prefixes; 4.5 Did you mean; 4.6 Prefer phrase matches; 4.7 Fuzzy search; 4.8 Phrase search and proximity; 4.9 Quotes and exact matches; 4.10 prefer-recent:; 4.11 hastemplate:; 4.12 boost-templates:; 4.13 insource:; 4.14 Auxiliary Text; 4.15 Lead Text; 4.16 Commons Search; 5 See also
  • Stemming In search terminology, support for "stemming" means that a search for "swim" will also include "swimming" and "swimmed", but not "swam". There is support for dozens of languages, but all languages are wanted. There is a list of currently supported languages at elasticsearch.org; see their documentation on contributing to submit requests or patches.
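A hedged sketch of how the filter keywords listed in the contents above (intitle:, insource:, and so on) can be exercised through the standard MediaWiki search API; the endpoint and query are placeholders, and the third-party requests package is assumed.

```python
# Sketch: exercising CirrusSearch filter keywords via the MediaWiki search API.
import requests

resp = requests.get(
    "https://www.mediawiki.org/w/api.php",
    params={
        "action": "query",
        "list": "search",
        "srsearch": 'intitle:CirrusSearch insource:"full text"',
        "format": "json",
    },
)
for hit in resp.json()["query"]["search"]:
    print(hit["title"])
```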
  • ...1 more annotation...
  • See also Full specifications in the browser tests
  •  
    Lots of new tricks to learn on sites using MediaWiki as folks update their installations. I'm not a big fan of programs written in PHP and JavaScript, but they're impossible to avoid on the Web. So is MediaWiki, so any real improvements help.
Paul Merrell

Edward Snowden Explains How To Reclaim Your Privacy - 0 views

  • Micah Lee: What are some operational security practices you think everyone should adopt? Just useful stuff for average people. Edward Snowden: [Opsec] is important even if you’re not worried about the NSA. Because when you think about who the victims of surveillance are, on a day-to-day basis, you’re thinking about people who are in abusive spousal relationships, you’re thinking about people who are concerned about stalkers, you’re thinking about children who are concerned about their parents overhearing things. It’s to reclaim a level of privacy. The first step that anyone could take is to encrypt their phone calls and their text messages. You can do that through the smartphone app Signal, by Open Whisper Systems. It’s free, and you can just download it immediately. And anybody you’re talking to now, their communications, if it’s intercepted, can’t be read by adversaries. [Signal is available for iOS and Android, and, unlike a lot of security tools, is very easy to use.] You should encrypt your hard disk, so that if your computer is stolen the information isn’t obtainable to an adversary — pictures, where you live, where you work, where your kids are, where you go to school. [I’ve written a guide to encrypting your disk on Windows, Mac, and Linux.] Use a password manager. One of the main things that gets people’s private information exposed, not necessarily to the most powerful adversaries, but to the most common ones, are data dumps. Your credentials may be revealed because some service you stopped using in 2007 gets hacked, and your password that you were using for that one site also works for your Gmail account. A password manager allows you to create unique passwords for every site that are unbreakable, but you don’t have the burden of memorizing them. [The password manager KeePassX is free, open source, cross-platform, and never stores anything in the cloud.]
  • The other thing there is two-factor authentication. The value of this is if someone does steal your password, or it’s left or exposed somewhere … [two-factor authentication] allows the provider to send you a secondary means of authentication — a text message or something like that. [If you enable two-factor authentication, an attacker needs both your password as the first factor and a physical device, like your phone, as your second factor, to login to your account. Gmail, Facebook, Twitter, Dropbox, GitHub, Battle.net, and tons of other services all support two-factor authentication.]
  • We should armor ourselves using systems we can rely on every day. This doesn’t need to be an extraordinary lifestyle change. It doesn’t have to be something that is disruptive. It should be invisible, it should be atmospheric, it should be something that happens painlessly, effortlessly. This is why I like apps like Signal, because they’re low friction. It doesn’t require you to re-order your life. It doesn’t require you to change your method of communications. You can use it right now to talk to your friends.
  • ...4 more annotations...
  • Lee: What do you think about Tor? Do you think that everyone should be familiar with it, or do you think that it’s only a use-it-if-you-need-it thing? Snowden: I think Tor is the most important privacy-enhancing technology project being used today. I use Tor personally all the time. We know it works from at least one anecdotal case that’s fairly familiar to most people at this point. That’s not to say that Tor is bulletproof. What Tor does is it provides a measure of security and allows you to disassociate your physical location. … But the basic idea, the concept of Tor that is so valuable, is that it’s run by volunteers. Anyone can create a new node on the network, whether it’s an entry node, a middle router, or an exit point, on the basis of their willingness to accept some risk. The voluntary nature of this network means that it is survivable, it’s resistant, it’s flexible. [Tor Browser is a great way to selectively use Tor to look something up and not leave a trace that you did it. It can also help bypass censorship when you’re on a network where certain sites are blocked. If you want to get more involved, you can volunteer to run your own Tor node, as I do, and support the diversity of the Tor network.]
  • Lee: So that is all stuff that everybody should be doing. What about people who have exceptional threat models, like future intelligence-community whistleblowers, and other people who have nation-state adversaries? Maybe journalists, in some cases, or activists, or people like that? Snowden: So the first answer is that you can’t learn this from a single article. The needs of every individual in a high-risk environment are different. And the capabilities of the adversary are constantly improving. The tooling changes as well. What really matters is to be conscious of the principles of compromise. How can the adversary, in general, gain access to information that is sensitive to you? What kinds of things do you need to protect? Because of course you don’t need to hide everything from the adversary. You don’t need to live a paranoid life, off the grid, in hiding, in the woods in Montana. What we do need to protect are the facts of our activities, our beliefs, and our lives that could be used against us in manners that are contrary to our interests. So when we think about this for whistleblowers, for example, if you witnessed some kind of wrongdoing and you need to reveal this information, and you believe there are people that want to interfere with that, you need to think about how to compartmentalize that.
  • Tell no one who doesn’t need to know. [Lindsay Mills, Snowden’s girlfriend of several years, didn’t know that he had been collecting documents to leak to journalists until she heard about it on the news, like everyone else.] When we talk about whistleblowers and what to do, you want to think about tools for protecting your identity, protecting the existence of the relationship from any type of conventional communication system. You want to use something like SecureDrop, over the Tor network, so there is no connection between the computer that you are using at the time — preferably with a non-persistent operating system like Tails, so you’ve left no forensic trace on the machine you’re using, which hopefully is a disposable machine that you can get rid of afterward, that can’t be found in a raid, that can’t be analyzed or anything like that — so that the only outcome of your operational activities are the stories reported by the journalists. [SecureDrop is a whistleblower submission system. Here is a guide to using The Intercept’s SecureDrop server as safely as possible.]
  • And this is to be sure that whoever has been engaging in this wrongdoing cannot distract from the controversy by pointing to your physical identity. Instead they have to deal with the facts of the controversy rather than the actors that are involved in it. Lee: What about for people who are, like, in a repressive regime and are trying to … Snowden: Use Tor. Lee: Use Tor? Snowden: If you’re not using Tor you’re doing it wrong. Now, there is a counterpoint here where the use of privacy-enhancing technologies in certain areas can actually single you out for additional surveillance through the exercise of repressive measures. This is why it’s so critical for developers who are working on security-enhancing tools to not make their protocols stand out.
  •  
    Lots more in the interview that I didn't highlight. This is a must-read.
Paul Merrell

Leaked docs show spyware used to snoop on US computers | Ars Technica - 0 views

  • Software created by the controversial UK-based Gamma Group International was used to spy on computers that appear to be located in the United States, the UK, Germany, Russia, Iran, and Bahrain, according to a leaked trove of documents analyzed by ProPublica. It's not clear whether the surveillance was conducted by governments or private entities. Customer e-mail addresses in the collection appeared to belong to a German surveillance company, an independent consultant in Dubai, the Bosnian and Hungarian Intelligence services, a Dutch law enforcement officer, and the Qatari government.
  • The leaked files—which were posted online by hackers—are the latest in a series of revelations about how state actors including repressive regimes have used Gamma's software to spy on dissidents, journalists, and activist groups. The documents, leaked last Saturday, could not be readily verified, but experts told ProPublica they believed them to be genuine. "I think it's highly unlikely that it's a fake," said Morgan Marquis-Boire, a security researcher who while at The Citizen Lab at the University of Toronto had analyzed Gamma Group's software and who authored an article about the leak on Thursday. The documents confirm many details that have already been reported about Gamma, such as that its tools were used to spy on Bahraini activists. Some documents in the trove contain metadata tied to e-mail addresses of several Gamma employees. Bill Marczak, another Gamma Group expert at the Citizen Lab, said that several dates in the documents correspond to publicly known events—such as the day that a particular Bahraini activist was hacked.
  • The leaked files contain more than 40 gigabytes of confidential technical material, including software code, internal memos, strategy reports, and user guides on how to use Gamma Group software suite called FinFisher. FinFisher enables customers to monitor secure Web traffic, Skype calls, webcams, and personal files. It is installed as malware on targets' computers and cell phones. A price list included in the trove lists a license of the software at almost $4 million. The documents reveal that Gamma uses technology from a French company called Vupen Security that sells so-called computer "exploits." Exploits include techniques called "zero days" for "popular software like Microsoft Office, Internet Explorer, Adobe Acrobat Reader, and many more." Zero days are exploits that have not yet been detected by the software maker and therefore are not blocked.
  • ...2 more annotations...
  • Many of Gamma's product brochures have previously been published by the Wall Street Journal and Wikileaks, but the latest trove shows how the products are getting more sophisticated. In one document, engineers at Gamma tested a product called FinSpy, which inserts malware onto a user's machine, and found that it could not be blocked by most antivirus software. Documents also reveal that Gamma had been working to bypass encryption tools including a mobile phone encryption app, Silent Circle, and were able to bypass the protection given by hard-drive encryption products TrueCrypt and Microsoft's Bitlocker.
  • The documents also describe a "country-wide" surveillance product called FinFly ISP which promises customers the ability to intercept Internet traffic and masquerade as ordinary websites in order to install malware on a target's computer. The most recent date-stamp found in the documents is August 2, coinciding with the first tweet by a parody Twitter account, @GammaGroupPR, which first announced the hack and may be run by the hacker or hackers responsible for the leak. On Reddit, a user called PhineasFisher claimed responsibility for the leak. "Two years ago their software was found being widely used by governments in the middle east, especially Bahrain, to hack and spy on the computers and phones of journalists and dissidents," the user wrote. The name on the @GammaGroupPR Twitter account is also "Phineas Fisher." GammaGroup, the surveillance company whose documents were released, is no stranger to the spotlight. The security firm F-Secure first reported the purchase of FinFisher software by the Egyptian State Security agency in 2011. In 2012, Bloomberg News and The Citizen Lab showed how the company's malware was used to target activists in Bahrain. In 2013, the software company Mozilla sent a cease-and-desist letter to the company after a report by The Citizen Lab showed that a spyware-infected version of the Firefox browser manufactured by Gamma was being used to spy on Malaysian activists.
Gonzalo San Gil, PhD.

Best of 2013: Guides and tutorials from Opensource.com | opensource.com - 0 views

  •  
    "This year at Opensource.com, we challenged our contributors to give us the best and most useful guides, how-tos, and tutorials they could produce from their experiences and work in various open source industries and sectors. In this Best of Opensource.com, our top guides and tutorials this year fell within the four buckets you see below. If you can answer YES to any of the following questions, there's an open source way guide here for you! Do you... Participate in open source projects or communities? Want to learn more about open source? Want to write a book? Create a video? Teach? Work with students?"
Paul Merrell

Do Not Track Implementation Guide Launched | Electronic Frontier Foundation - 1 views

  • Today we are releasing the implementation guide for EFF’s Do Not Track (DNT) policy. For years users have been able to set a Do Not Track signal in their browser, but there has been little guidance for websites as to how to honor that request. EFF’s DNT policy sets out a meaningful response for servers to follow, and this guide provides details about how to apply it in practice. At its core, DNT protects user privacy by excluding the use of unique identifiers for cross-site tracking, and by limiting the retention period of log data to ten days. This short retention period gives sites the time they need for debugging and security purposes, and to generate aggregate statistical data. From this baseline, the policy then allows exceptions when the user's interactions with the site—e.g., to post comments, make a purchase, or click on an ad—necessitate collecting more information. The site is then free to retain any data necessary to complete the transaction. We believe this approach balances users’ privacy expectations with the ability of websites to deliver the functionality users want. Websites often integrate third-party content and rely on third-party services (like content delivery networks or analytics), and this creates the potential for user data to be leaked despite the best intentions of the site operator. The guide identifies potential pitfalls and catalogs providers of compliant services. It is common, for example, to embed media from platforms like YouTube, SoundCloud, and Twitter, all of which track users whenever their widgets are loaded. Fortunately, Embedly, which offers control over the appearance of embeds, also supports DNT via its API, displaying a poster instead and loading the widget only if the user clicks on it knowingly.
  • Knowledge makes the difference between willing tracking and non-consensual tracking. Users should be able to choose whether they want to give up their privacy in exchange for using a site or a particular feature. This means sites need to be transparent about their practices. A great example of this is our biggest adopter, Medium, which does not track DNT users who browse the site and gives clear information about tracking to users when they choose to log in. This is their previous log-in panel; the DNT language is currently being added to their new interface.
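As a rough illustration of the server-side behaviour the policy asks for, the sketch below is a minimal WSGI middleware that refrains from setting a tracking cookie when the browser sends DNT: 1. It is an assumption-laden toy, not EFF's reference implementation, and it ignores the policy's other requirements (log retention, third-party embeds).

```python
# Sketch: skip issuing a visitor-tracking cookie for clients that send "DNT: 1".
# Illustration only; names and cookie scheme are invented for the example.
import uuid

class RespectDNT:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        dnt = environ.get("HTTP_DNT") == "1"

        def start(status, headers, exc_info=None):
            # Only tag non-DNT visitors, and only if they have no cookie yet.
            if not dnt and "HTTP_COOKIE" not in environ:
                headers.append(
                    ("Set-Cookie", f"visitor_id={uuid.uuid4()}; Path=/")
                )
            return start_response(status, headers, exc_info)

        return self.app(environ, start)
```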
Gonzalo San Gil, PhD.

Advocacy 2.0 Guide: Tools for Digital Advocacy - Global Voices Advocacy - 0 views

  •  
    [The Advocacy 2.0 Guide (Tools for Digital Advocacy) describes some of the best techniques and tools that digital activists - and others who wish to learn from this subject - can use as part of their online advocacy campaigns. While our previous guide (Blog for a Cause!) focused on the effective use of blogs as an advocacy tool, this guide will explore creative uses of other web 2.0 applications. ...]
Paul Merrell

The best way to read Glenn Greenwald's 'No Place to Hide' - 0 views

  • Journalist Glenn Greenwald just dropped a pile of new secret National Security Agency documents onto the Internet. But this isn’t just some haphazard WikiLeaks-style dump. These documents, leaked to Greenwald last year by former NSA contractor Edward Snowden, are key supplemental reading material for his new book, No Place to Hide, which went on sale Tuesday. Now, you could just go buy the book in hardcover and read it like you would any other nonfiction tome. Thanks to all the additional source material, however, if any work should be read on an e-reader or computer, this is it. Here are all the links and instructions for getting the most out of No Place to Hide.
  • Greenwald has released two versions of the accompanying NSA docs: a compressed version and an uncompressed version. The only difference between these two is the quality of the PDFs. The uncompressed version clocks in at over 91MB, while the compressed version is just under 13MB. For simple reading purposes, just go with the compressed version and save yourself some storage space. Greenwald also released additional “notes” for the book, which are just citations. Unless you’re doing some scholarly research, you can skip this download.
  • No Place to Hide is, of course, available in a wide variety of ebook formats—all of which are a few dollars cheaper than the hardcover version, I might add. Pick your e-poison: Amazon, Nook, Kobo, iBooks. Flipping back and forth: Each page of the documents includes a corresponding page number for the book, to allow readers to easily flip between the book text and the supporting documents. If you use the Amazon Kindle version, you also have the option of reading Greenwald’s book on your computer using the Kindle for PC app or in your browser. Yes, that may be the worst way to read a book. In this case, however, it may be the easiest way to flip back and forth between the book text and the notes and supporting documents. Of course, you can do the same on your e-reader—though it can be a bit of a pain. Those of you who own a tablet are in luck, as tablets provide the best way to read both ebooks and PDF files. Simply download the book using the e-reader app of your choice, download the PDFs from Greenwald’s website, and dig in. If you own a Kindle, Nook, or other e-reader, you may have to convert the PDFs into a format that works well with your device. The Internet is full of tools and how-to guides for doing this; one possible approach is sketched after these notes.
  • Kindle users also have the option of using Amazon’s Whispernet service, which converts PDFs into a format that functions best on the company’s e-reader. That will cost you a small fee, however—$0.15 per megabyte, which means the compressed Greenwald docs will cost you a whopping $1.95.
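On the conversion step mentioned in the notes above, one widely used route is Calibre's ebook-convert command-line tool, which can be driven from a short script. This is a sketch under assumptions: Calibre must already be installed, and the file name below is a placeholder rather than the actual name of Greenwald's download.

```python
# Sketch: convert one of the document PDFs into EPUB for an e-reader,
# assuming Calibre (which provides the ebook-convert CLI) is installed.
# The input file name is a placeholder, not the actual download name.
import subprocess
from pathlib import Path

def pdf_to_epub(pdf_path: str) -> Path:
    """Convert a PDF to an EPUB file alongside the original."""
    source = Path(pdf_path)
    target = source.with_suffix(".epub")
    # ebook-convert takes an input file and an output file; the output
    # extension selects the format (.epub, .mobi, .azw3, and so on).
    subprocess.run(["ebook-convert", str(source), str(target)], check=True)
    return target

if __name__ == "__main__":
    print(pdf_to_epub("greenwald-documents-compressed.pdf"))
```

Scanned-image PDFs rarely reflow well, so for a document set like this the converted file is mainly useful for keeping the pages on the same device as the ebook rather than for a polished reading experience.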
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 1 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the-art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, has not embraced 21st-century practices: it posts no frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES (in alphabetical order)
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to a Department of Homeland Security guide (essentially OCRing the text and noting where non-textual fields appear); a scripted sketch of that OCR step appears after these notes. Even if agencies insist it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.) Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It is not cost effective to spend tens (or hundreds) of person-hours searching for, reviewing, and redacting documents in response to a FOIA request, only to mail them to the requester, who may slip them into a desk drawer and forget about them. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status by demonstrating this very principle: it spent no new funds and hired no contractors to build its Electronic Reading Room, instead building a self-sustaining platform that will save the agency time and money going forward.
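On the OCR point raised in the disabilities-compliance note above, the step of adding a searchable text layer to a scanned release can also be scripted rather than clicked through in Acrobat. The sketch below uses the open-source ocrmypdf package as one possible tool; the file names are placeholders, and OCR alone does not make a document fully Section 508 compliant (tagging, reading order, and descriptions of non-textual fields still matter).

```python
# Sketch: add a searchable OCR text layer to a scanned FOIA release,
# using the open-source ocrmypdf package (pip install ocrmypdf; it also
# requires the Tesseract OCR engine). File names are placeholders.
# OCR is only one part of 508 compliance; document tagging and
# descriptions of non-textual fields still need separate attention.
import ocrmypdf

def ocr_scanned_release(input_pdf: str, output_pdf: str) -> None:
    ocrmypdf.ocr(
        input_pdf,
        output_pdf,
        deskew=True,     # straighten crooked scans before recognizing text
        skip_text=True,  # leave pages that already contain a text layer alone
    )

if __name__ == "__main__":
    ocr_scanned_release("foia-release-scan.pdf", "foia-release-searchable.pdf")
```

The broader point stands either way: once a release exists digitally and is readable by a screen reader, the accessibility argument against posting it largely falls away.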
Gonzalo San Gil, PhD.

Linux Security Guide (extended version) - Linux Audit - 0 views

  •  
    "With so many articles about Linux security on the internet, you may feel overwhelmed by how to properly secure your Linux systems. With this guide, we walk through different steps, tools, and resources. The main goal is to have you make an educated choice on what security defenses to implement on Linux. For this reason, this article won't show any specific configuration values, as it would implicate a possible best value. Instead, related articles and resources will be available in the text. The goal is to make this guide into a go-to article for when you need to secure your Linux installation. If you like this article, help others and share it on your favorite social media channels. Got feedback? Use the comments at the bottom. This document in work in progress and last updated in September 2016"
Paul Merrell

Cell Phone Guide For US Protesters, Updated 2014 Edition | Electronic Frontier Foundation - 0 views

  • With major protests in the news again, we decided it's time to update our cell phone guide for protesters. A lot has changed since we last published this report in 2011, for better and for worse. On the one hand, we've learned more about the massive volume of law enforcement requests for cell phone data—ranging from location information to actual content—and the widespread use of dedicated cell phone surveillance technologies. On the other hand, strong Supreme Court opinions have eliminated any ambiguity about the unconstitutionality of warrantless searches of phones incident to arrest, and a growing national consensus says location data, too, is private. Protesters want to be able to communicate, to document the protests, and to share photos and video with the world. So they'll be carrying phones, and they'll face a complex set of considerations about the privacy of the data those phones hold. We hope this guide can help answer some questions about how to best protect that data, and what rights protesters have in the face of police demands.
Paul Merrell

Banning end-to-end encryption being considered by Trump team- 9to5Mac - 0 views

  • The Trump administration is considering the possibility of banning end-to-end encryption, as used by services like Apple’s Messages and FaceTime, as well as competing platforms like WhatsApp and Signal. The topic was reportedly the focus of a previously unreported National Security Council meeting on Wednesday. Politico cites three sources for the story. Senior Trump administration officials met on Wednesday to discuss whether to seek legislation prohibiting tech companies from using forms of encryption that law enforcement can’t break — a provocative step that would reopen a long-running feud between federal authorities and Silicon Valley. The encryption challenge, which the government calls “going dark,” was the focus of a National Security Council meeting Wednesday morning that included the No. 2 officials from several key agencies, according to three people familiar with the matter. The meeting reportedly discussed two options. Senior officials debated whether to ask Congress to effectively outlaw end-to-end encryption, which scrambles data so that only its sender and recipient can read it […] “The two paths were to either put out a statement or a general position on encryption, and [say] that they would continue to work on a solution, or to ask Congress for legislation,” said one of the people. No decision was reached given strongly opposing views within the government.
Paul Merrell

Guest Post: NSA Reform - The Consequences of Failure | Just Security - 0 views

  • In the absence of real reform, people and institutions at home and abroad are taking matters into their own hands. In America, the NSA’s overreach is changing the way we communicate with and relate to each other. In order to evade government surveillance, more and more Americans are employing encryption technology. The veritable explosion of new secure messaging apps like Surespot, Open Whisper Systems’ collaboration with WhatsApp, the development and deployment of open source anti-surveillance tools like Detekt, the creation of organizationally sponsored “surveillance self-defense” guides, the push to universalize the HTTPS protocol, and anti-surveillance book events featuring free encryption workshops—all are manifestations of the rise of the personal encryption and pro-privacy digital resistance movement. Its political implications are clear: Americans, along with people around the world, increasingly see the United States government’s overreaching surveillance activities as a threat to be blocked.
  • The federal government’s vacuum-cleaner approach to surveillance—manifested in Title II of the PATRIOT Act, the FISA Amendments Act, and EO 12333—has backfired in these respects, and the emergence of this digital resistance movement is one result. Indeed, the existence and proliferation of social networks hold the potential to help this movement spread faster and to more of the general public than would have been possible in decades past. This is evidenced by the growing concern worldwide about governments’ ability to access reams of information about people’s lives with relative ease. As one measure, compared to a year ago, 41% of online users in North America now avoid certain Internet sites and applications, 16% change who they communicate with, and 24% censor what they say online. Those numbers, if anywhere close to accurate, are a major concern for democratic society.
  • Even if commercially available privacy technology proves capable of providing a genuine shield against warrantless or otherwise illegal surveillance by the United States government, it will remain a treatment for the symptom, not a cure for the underlying legal and constitutional malady. In April 2014, a Harris poll of US adults showed that in response to the Snowden revelations, “Almost half of respondents (47%) said that they have changed their online behavior and think more carefully about where they go, what they say, and what they do online.” Set aside for a moment that the federal government’s mere collection of the data of innocent Americans is itself likely a violation of the Fourth Amendment. The Harris poll is just one of numerous studies highlighting the collateral damage to American society and politics from the NSA’s excesses: segments of our population are now fearful of even associating with individuals or organizations executive branch officials deem controversial or suspicious. Nearly half of Americans say they have changed their online behavior out of a fear of what the federal government might do with their personal information. The Constitution’s free association guarantee has been damaged by the Surveillance State’s very operation.
  • The failure of Congress and the courts to end the surveillance state, despite the repeated efforts by a huge range of political and public interest actors to effect that change through the political process, is only fueling the growing resistance movement. Federal officials understand this, which is why they are trying—desperately and, in the view of some, underhandedly—to shut down this digital resistance movement. This action/reaction cycle is exactly what it appears to be: an escalating conflict between the American public and its government. Without comprehensive surveillance authority reforms (including a journalist “shield law” and ironclad whistleblower protections for Intelligence Community contractors) that are verifiable and enforceable, that conflict will only continue.
Paul Merrell

After Two Years, White House Finally Responds to Snowden Pardon Petition - With a "No" - 1 views

  • The White House on Tuesday ended two years of ignoring a hugely popular whitehouse.gov petition calling for NSA whistleblower Edward Snowden to be “immediately issued a full, free, and absolute pardon,” saying thanks for signing, but no. “We live in a dangerous world,” Lisa Monaco, President Obama’s adviser on homeland security and terrorism, said in a statement. More than 167,000 people signed the petition, which surpassed the 100,000 signatures that the White House’s “We the People” website said would garner a guaranteed response on June 24, 2013. In Tuesday’s response, the White House acknowledged that “This is an issue that many Americans feel strongly about.”
  • Monaco then explained her position: “Instead of constructively addressing these issues, Mr. Snowden’s dangerous decision to steal and disclose classified information had severe consequences for the security of our country and the people who work day in and day out to protect it.” Snowden didn’t actually disclose any classified information — news organizations including the Guardian, Washington Post, New York Times and The Intercept did the disclosing. And the Obama administration has yet to specify any “severe consequences” that can be independently confirmed.
  • The Snowden response was one of 20 responses to what the White House called “our We the People backlog.” The White House had been criticized for avoiding uncomfortable topics despite their popular support. On Twitter, the responses to the Snowden response, some from signers of the petition, were highly critical.
Gonzalo San Gil, PhD.

Guide to Creative Commons » OAPEN-UK - 1 views

  •  
    "An output of the OAPEN-UK project, this guide explores concerns expressed in public evidence given by researchers, learned societies and publishers to inquiries in the House of Commons and the House of Lords, and also concerns expressed by researchers working with the OAPEN-UK project. We have also identified a number of common questions and have drafted answers, which have been checked by experts including Creative Commons. The guide has been edited by active researchers, to make sure that it is relevant and useful to academics faced with making decisions about publishing."
Paul Merrell

Long-Secret Stingray Manuals Detail How Police Can Spy on Phones - 0 views

  • Harris Corp.’s Stingray surveillance device has been one of the most closely guarded secrets in law enforcement for more than 15 years. The company and its police clients across the United States have fought to keep information about the mobile phone-monitoring boxes from the very public against whom they are used. The Intercept has obtained several Harris instruction manuals spanning roughly 200 pages and meticulously detailing how to create a cellular surveillance dragnet. Harris has fought to keep its surveillance equipment, which carries price tags in the low six figures, hidden from both privacy activists and the general public, arguing that information about the gear could help criminals. Accordingly, an older Stingray manual released under the Freedom of Information Act to news website TheBlot.com last year was almost completely redacted. So too have law enforcement agencies at every level, across the country, evaded almost all attempts to learn how and why these extremely powerful tools are being used — though court battles have made it clear Stingrays are often deployed without any warrant. The San Bernardino Sheriff’s Department alone has snooped via Stingray, sans warrant, over 300 times.
  • The documents described and linked below, instruction manuals for the software used by Stingray operators, were provided to The Intercept as part of a larger cache believed to have originated with the Florida Department of Law Enforcement. Two of them contain a “distribution warning” saying they contain “Proprietary Information and the release of this document and the information contained herein is prohibited to the fullest extent allowable by law.”  Although “Stingray” has become a catch-all name for devices of its kind, often referred to as “IMSI catchers,” the manuals include instructions for a range of other Harris surveillance boxes, including the Hailstorm, ArrowHead, AmberJack, and KingFish. They make clear the capability of those devices and the Stingray II to spy on cellphones by, at minimum, tracking their connection to the simulated tower, information about their location, and certain “over the air” electronic messages sent to and from them. Wessler added that parts of the manuals make specific reference to permanently storing this data, something that American law enforcement has denied doing in the past.
  • One piece of Windows software used to control Harris’s spy boxes, software that appears to be sold under the name “Gemini,” allows police to track phones across 2G, 3G, and LTE networks. Another Harris app, “iDen Controller,” provides a litany of fine-grained options for tracking phones. A law enforcement agent using these pieces of software along with Harris hardware could not only track a large number of phones as they moved throughout a city but could also apply nicknames to certain phones to keep track of them in the future. The manual describing how to operate iDEN, the lengthiest document of the four at 156 pages, uses an example of a target (called a “subscriber”) tagged alternately as Green Boy and Green Ben:
  • In order to maintain an uninterrupted connection to a target’s phone, the Harris software also offers the option of intentionally degrading (or “redirecting”) someone’s phone onto an inferior network, for example, knocking a connection from LTE to 2G:
  • A video of the Gemini software installed on a personal computer, obtained by The Intercept and embedded below, provides not only an extensive demonstration of the app but also underlines how accessible the mass surveillance code can be: Installing a complete warrantless surveillance suite is no more complicated than installing Skype. Indeed, software such as Photoshop or Microsoft Office, which requires a registration key or some other proof of ownership, is more strictly controlled by its makers than software designed for cellular interception.