
Paul Merrell

US pushing local cops to stay mum on surveillance - Yahoo News - 0 views

  • WASHINGTON (AP) -- The Obama administration has been quietly advising local police not to disclose details about surveillance technology they are using to sweep up basic cellphone data from entire neighborhoods, The Associated Press has learned. Citing security reasons, the U.S. has intervened in routine state public records cases and criminal trials regarding use of the technology. This has resulted in police departments withholding materials or heavily censoring documents in rare instances when they disclose any about the purchase and use of such powerful surveillance equipment. Federal involvement in local open records proceedings is unusual. It comes at a time when President Barack Obama has said he welcomes a debate on government surveillance and called for more transparency about spying in the wake of disclosures about classified federal surveillance programs.
  • One well-known type of this surveillance equipment is known as a Stingray, an innovative way for law enforcement to track cellphones used by suspects and gather evidence. The equipment tricks cellphones into identifying some of their owners' account information, like a unique subscriber number, and transmitting data to police as if it were a phone company's tower. That allows police to obtain cellphone information without having to ask for help from service providers, such as Verizon or AT&T, and can locate a phone without the user even making a call or sending a text message. But without more details about how the technology works and under what circumstances it's used, it's unclear whether the technology might violate a person's constitutional rights or whether it's a good investment of taxpayer dollars. Interviews, court records and public-records requests show the Obama administration is asking agencies to withhold common information about the equipment, such as how the technology is used and how to turn it on. That pushback has come in the form of FBI affidavits and consultation in local criminal cases.
  • "These extreme secrecy efforts are in relation to very controversial, local government surveillance practices using highly invasive technology," said Nathan Freed Wessler, a staff attorney with the American Civil Liberties Union, which has fought for the release of these types of records. "If public participation means anything, people should have the facts about what the government is doing to them." Harris Corp., a key manufacturer of this equipment, built a secrecy element into its authorization agreement with the Federal Communications Commission in 2011. That authorization has an unusual requirement: that local law enforcement "coordinate with the FBI the acquisition and use of the equipment." Companies like Harris need FCC authorization in order to sell wireless equipment that could interfere with radio frequencies. A spokesman from Harris Corp. said the company will not discuss its products for the Defense Department and law enforcement agencies, although public filings showed government sales of communications systems such as the Stingray accounted for nearly one-third of its $5 billion in revenue. "As a government contractor, our solutions are regulated and their use is restricted," spokesman Jim Burke said.
  • Local police agencies have been denying access to records about this surveillance equipment under state public records laws. Agencies in San Diego, Chicago and Oakland County, Michigan, for instance, declined to tell the AP what devices they purchased, how much they cost and with whom they shared information. San Diego police released a heavily censored purchasing document. Oakland officials said police-secrecy exemptions and attorney-client privilege keep their hands tied. It was unclear whether the Obama administration interfered in the AP requests. "It's troubling to think the FBI can just trump the state's open records law," said Ginger McCall, director of the open government project at the Electronic Privacy Information Center. McCall suspects the surveillance would not pass constitutional muster. "The vast amount of information it sweeps in is totally irrelevant to the investigation," she said.
  • A court case challenging the public release of information from the Tucson Police Department includes an affidavit from an FBI special agent, Bradley Morrison, who said the disclosure would "result in the FBI's inability to protect the public from terrorism and other criminal activity because through public disclosures, this technology has been rendered essentially useless for future investigations." Morrison said revealing any information about the technology would violate a federal homeland security law about information-sharing and arms-control laws — legal arguments that outside lawyers and transparency experts said are specious and don't comport with court cases on the U.S. Freedom of Information Act. The FBI did not answer questions about its role in states' open records proceedings.
  • But a former Justice Department official said the federal government should be making this argument in federal court, not at the state level, where different public records laws apply. "The federal government appears to be attempting to assert a federal interest in the information being sought, but it's going about it the wrong way," said Dan Metcalfe, the former director of the Justice Department's office of information and privacy. Metcalfe is currently the executive director of the Collaboration on Government Secrecy project at American University's law school. A criminal case in Tallahassee cites the same homeland security laws in Morrison's affidavit, court records show, and prosecutors told the court they consulted with the FBI to keep portions of a transcript sealed. That transcript, released earlier this month, revealed that Stingrays "force" cellphones to register their location and identifying information with the police device and enable officers to track calls whenever the phone is on.
  • One law enforcement official familiar with the Tucson lawsuit, who spoke on condition of anonymity because the official was not authorized to speak about internal discussions, said federal lawyers told Tucson police they couldn't hand over a PowerPoint presentation made by local officers about how to operate the Stingray device. Federal officials forwarded Morrison's affidavit for use in the Tucson police department's reply to the lawsuit, rather than requesting the case be moved to federal court. In Sarasota, Florida, the U.S. Marshals Service confiscated local records on the use of the surveillance equipment, removing the documents from the reach of Florida's expansive open-records law after the ACLU asked under Florida law to see the documents. The ACLU has asked a judge to intervene. The Marshals Service said it deputized the officer as a federal agent and therefore the records weren't accessible under Florida law.
  •  
    The Florida case is particularly interesting because Florida is within the jurisdiction of the U.S. Eleventh Circuit Court of Appeals, which has just ruled that law enforcement must obtain a search warrant from a court before using equipment to determine a cell phone's location.  
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
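As a concrete reference point, a minimal, valid XHTML 1.0 Strict document looks like this (the title and paragraph text are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Chapter One</title>
  </head>
  <body>
    <!-- Because this is well-formed XML, any XML parser or XSLT
         processor can consume it; because it is XHTML, any Web
         browser can render it directly. -->
    <p>Call me Ishmael.</p>
  </body>
</html>
```

That dual citizenship—parseable as XML, renderable as HTML—is what makes XHTML useful as a workflow format.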
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
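To make the contrast concrete, here is the same sentence marked up both ways; the DocBook element names are real, but the fragment is illustrative only:

```xml
<!-- XHTML: structure only; a paragraph is just a paragraph -->
<p>Call me Ishmael.</p>

<!-- DocBook: the same text carrying an explicit semantic role -->
<epigraph>
  <para>Call me Ishmael.</para>
</epigraph>
```

The DocBook version tells downstream tools *what* the text is, not merely that it is a block of prose—at the cost of a much richer (and costlier) tagging vocabulary.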
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
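A sketch of what such class-based extension might look like; the class names here are invented for illustration, since each publisher would define its own vocabulary:

```xml
<!-- Plain XHTML, enriched with publisher-defined class values
     in the spirit of microformats -->
<div class="chapter">
  <p class="epigraph">Call me Ishmael.</p>
  <p class="first-paragraph">The story begins in earnest here.</p>
  <p>Subsequent paragraphs need no special role.</p>
</div>
```

Any browser renders this as ordinary paragraphs, while a stylesheet or transformation script can still single out epigraphs and opening paragraphs by class.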
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
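Unzipping an ePub file makes this anatomy visible: a `mimetype` file, a `META-INF/` directory, a package metadata file, and the XHTML content itself. The one fixed piece of plumbing, `META-INF/container.xml`, simply points at the package file (the `OEBPS/` path below is conventional rather than required):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <!-- Points readers at the package (.opf) file, which in turn
         lists the XHTML content documents and their order -->
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```

Everything else in the archive is metadata and XHTML—which is why the article’s “a Web page in a wrapper” characterization is fair.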
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
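The kind of cleanup involved looks roughly like this; the `class` value is illustrative, since the actual names InDesign emits depend on the document’s paragraph styles:

```xml
<!-- As exported: list items disguised as styled paragraphs -->
<p class="bullet-list">First item</p>
<p class="bullet-list">Second item</p>

<!-- After search-and-replace cleanup: proper XHTML structure,
     meaningful to any downstream tool with no styling required -->
<ul>
  <li>First item</li>
  <li>Second item</li>
</ul>
```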
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
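The heart of such a script is a set of templates mapping each XHTML element to its ICML counterpart. A single illustrative template might look like the following; the style name “Body Text” is an assumption, and a real ICML file also needs a document wrapper and templates for headings, lists, and inline markup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xhtml="http://www.w3.org/1999/xhtml">

  <!-- Map each XHTML paragraph to an ICML paragraph range
       carrying a named InDesign paragraph style -->
  <xsl:template match="xhtml:p">
    <ParagraphStyleRange
        AppliedParagraphStyle="ParagraphStyle/Body Text">
      <CharacterStyleRange
          AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <!-- value-of flattens inline markup; em/strong etc.
             would need their own CharacterStyleRange templates -->
        <Content><xsl:value-of select="."/></Content>
      </CharacterStyleRange>
      <Br/>
    </ParagraphStyleRange>
  </xsl:template>

</xsl:stylesheet>
```

Run with the processor named in the article—something like `xsltproc transform.xsl chapter.xhtml > chapter.icml`—the output can then be “placed” into the InDesign template as described.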
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-ready dialect of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

The Fundamentals of US Surveillance: What Edward Snowden Never Told Us? | Global Resear... - 0 views

  • Former US intelligence contractor Edward Snowden’s revelations rocked the world. According to his detailed reports, the US had launched massive spying programs and was scrutinizing the communications of American citizens in a manner which could only be described as extreme and intense. The US’s reaction was swift and to the point. “Nobody is listening to your telephone calls,” President Obama said when asked about the NSA. As quoted in The Guardian, Obama went on to say that surveillance programs were “fully overseen not just by Congress but by the Fisa court, a court specially put together to evaluate classified programs to make sure that the executive branch, or government generally, is not abusing them”. However, it appears that Snowden may have missed a pivotal part of the US surveillance program. And in stating that “nobody” is listening to our calls, President Obama may have been fudging quite a bit.
  • In fact, Great Britain maintains a “listening post” at NSA HQ. The laws restricting live wiretaps do not apply to foreign countries  and thus this listening post  is not subject to  US law.  In other words, the restrictions upon wiretaps, etc. do not apply to the British listening post.  So when Great Britain hands over the recordings to the NSA, technically speaking, a law is not being broken and technically speaking, the US is not eavesdropping on our each and every call. It is Great Britain which is doing the eavesdropping and turning over these records to US intelligence. According to John Loftus, formerly an attorney with  the Department of Justice and author of a number of books concerning US intelligence activities, back in the late seventies  the USDOJ issued a memorandum proposing an amendment to FISA. Loftus, who recalls seeing  the memo, stated in conversation this week that the DOJ proposed inserting the words “by the NSA” into the FISA law  so the scope of the law would only restrict surveillance by the NSA, not by the British.  Any subsequent sharing of the data culled through the listening posts was strictly outside the arena of FISA. Obama was less than forthcoming when he insisted that “What I can say unequivocally is that if you are a US person, the NSA cannot listen to your telephone calls, and the NSA cannot target your emails … and have not.”
  • According to Loftus, the NSA is indeed listening as Great Britain is turning over the surveillance records en masse to that agency. Loftus states that the arrangement is reciprocal, with the US maintaining a parallel listening post in Great Britain. In an interview this past week, Loftus told this reporter that  he believes that Snowden simply did not know about the arrangement between Britain and the US. As a contractor, said Loftus, Snowden would not have had access to this information and thus his detailed reports on the extent of US spying, including such programs as XKeyscore, which analyzes internet data based on global demographics, and PRISM, under which the telecommunications companies, such as Google, Facebook, et al, are mandated to collect our communications, missed the critical issue of the FISA loophole.
  • ...2 more annotations...
  • U.S. government officials have defended the program by asserting it cannot be used on domestic targets without a warrant. But once again, the FISA courts and their super-secret warrants  do not apply to foreign government surveillance of US citizens. So all this sturm and drang about whether or not the US is eavesdropping on our communications is, in fact, irrelevant and diversionary.
  • In fact, the USA Freedom Act reinstituted a number of the surveillance protocols of Section 215, including  authorization for  roving wiretaps  and tracking “lone wolf terrorists.”  While mainstream media heralded the passage of the bill as restoring privacy rights which were shredded under 215, privacy advocates have maintained that the bill will do little, if anything, to reverse the  surveillance situation in the US. The NSA went on the record as supporting the Freedom Act, stating it would end bulk collection of telephone metadata. However, in light of the reciprocal agreement between the US and Great Britain, the entire hoopla over NSA surveillance, Section 215, FISA courts and the USA Freedom Act could be seen as a giant smokescreen. If Great Britain is collecting our real time phone conversations and turning them over to the NSA, outside the realm or reach of the above stated laws, then all this posturing over the privacy rights of US citizens and surveillance laws expiring and being resurrected doesn’t amount to a hill of CDs.
Gary Edwards

Ajaxian » Making creating DOM-based applications less of a hassle - 0 views

  • Dojo also has an implementation of the Django templating language, dojox.dtl. This is an extremely powerful template engine that, similar to this one, creates the HTML once, then updates it when the data changes. You simply update the data, call the template.render method, and the HTML is updated - no creating nodes repeatedly, no innerHTML or nodeValue access.
  •  
    a framework for JavaScript applications called ViewsHandler. ViewsHandler is not another JavaScript templating solution but works on the assumption that in most cases you'll have to create a lot of HTML initially, but then only change the content of some elements dynamically as new information gets loaded or users interact with the app. So instead of creating a lot of HTML over and over again, all I wanted to provide was a way to create all the needed HTML upfront and then have easy access to the parts of the HTML that need updating. The first thing you'll need to do to define your application is to create an object with the different views and pointers to the methods that populate the views:
Gary Edwards

Wolfram Alpha is Coming -- and It Could be as Important as Google | Twine - 0 views

  • The first question was could (or even should) Wolfram Alpha be built using the Semantic Web in some manner, rather than (or as well as) the Mathematica engine it is currently built on. Is anything missed by not building it with Semantic Web's languages (RDF, OWL, Sparql, etc.)? The answer is that there is no reason that one MUST use the Semantic Web stack to build something like Wolfram Alpha. In fact, in my opinion it would be far too difficult to try to explicitly represent everything Wolfram Alpha knows and can compute using OWL ontologies. It is too wide a range of human knowledge and giant OWL ontologies are just too difficult to build and curate.
  • However for the internal knowledge representation and reasoning that takes place in the system, it appears Wolfram has found a pragmatic and efficient representation of his own, and I don't think he needs the Semantic Web at that level. It seems to be doing just fine without it. Wolfram Alpha is built on hand-curated knowledge and expertise. Wolfram and his team have somehow figured out a way to make that practical where all others who have tried this have failed to achieve their goals. The task is gargantuan -- there is just so much diverse knowledge in the world. Representing even a small segment of it formally turns out to be extremely difficult and time-consuming.
  • It has generally not been considered feasible for any one group to hand-curate all knowledge about every subject. This is why the Semantic Web was invented -- by enabling everyone to curate their own knowledge about their own documents and topics in parallel, in principle at least, more knowledge could be represented and shared in less time by more people -- in an interoperable manner. At least that is the vision of the Semantic Web.
  • ...1 more annotation...
  • Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for ANSWERING questions about what we as a civilization collectively know. It's the next step in the distribution of knowledge and intelligence around the world -- a new leap in the intelligence of our collective "Global Brain." And like any big next-step, Wolfram Alpha works in a new way -- it computes answers instead of just looking them up.
  •  
    A Computational Knowledge Engine for the Web In a nutshell, Wolfram and his team have built what he calls a "computational knowledge engine" for the Web. OK, so what does that really mean? Basically it means that you can ask it factual questions and it computes answers for you. It doesn't simply return documents that (might) contain the answers, like Google does, and it isn't just a giant database of knowledge, like the Wikipedia. It doesn't simply parse natural language and then use that to retrieve documents, like Powerset, for example. Instead, Wolfram Alpha actually computes the answers to a wide range of questions -- like questions that have factual answers such as "What country is Timbuktu in?" or "How many protons are in a hydrogen atom?" or "What is the average rainfall in Seattle this month?," "What is the 300th digit of Pi?," "where is the ISS?" or "When was GOOG worth more than $300?" Think about that for a minute. It computes the answers. Wolfram Alpha doesn't simply contain huge amounts of manually entered pairs of questions and answers, nor does it search for answers in a database of facts. Instead, it understands and then computes answers to certain kinds of questions.
Paul Merrell

Japan, U.S. trade chiefs seek to clinch bilateral TPP deal - 毎日新聞 - 0 views

  • Talks on the TPP, which would create a massive free trade zone encompassing some 40 percent of global output, have long been stalled due partly to bickering between Japan and the United States -- the biggest economies in the TPP framework -- over removal of barriers for agricultural and automotive trade. The biggest sticking point has been Tokyo's proposed exceptions to tariff cuts on its five sensitive farm product categories -- rice, wheat, beef and pork, dairy products and sugar -- and safeguard measures it wants to introduce should imports of the products surge under the TPP, which aims for zero tariffs in principle. It is uncertain how much closer the two sides can move given that their recent working-level talks saw little progress, negotiation sources said.
  • A summit meeting of the Asia-Pacific Economic Cooperation forum scheduled for November in Beijing that Obama and leaders from other TPP countries are slated to join is seen as an occasion for concluding the TPP talks, which have entered their fifth year. But the odds on an agreement depend on whether Japan and the United States can bridge their gaps before that.
  • Hiroshi Oe, Japan's deputy chief TPP negotiator, has admitted that talks with his counterpart Wendy Cutler, Froman's top deputy, earlier this month in Tokyo made very little progress. One negotiation source said the hurdle for solving the outstanding bilateral problems is "extremely high," suggesting it is still premature to bring the talks to the ministerial level. Amari himself had been reluctant to hold a one-on-one meeting with Froman with the working-level negotiations failing to see enough progress. But he apparently decided to ramp up efforts in response to strong calls from Washington for arranging a meeting with Froman, who has said the two sides are "now at a critical juncture in this negotiation."
  • ...1 more annotation...
  • The TPP comprises Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, the United States and Vietnam.
  •  
    Get ready to fight TPP fast-tracking in member states. See also "Wikileaks' free trade documents reveal 'drastic' Australian concessions." Source: The Guardian. http://goo.gl/hicb5h Remember that in the U.S., only Senate ratification is required. The measure will not go before the House before implementation.
Paul Merrell

Demand an End to Secret Copyright Trade Deals | EFF Action Center - 0 views

  • Senator Ron Wyden may hold the future of the Internet in his hands. Let's call on him to fix the secretive process that has led to trade deals carrying extreme copyright and digital privacy provisions.
  • As Senate Finance Committee Chair, Senator Wyden is under pressure to fast track trade agreements like the Trans-Pacific Partnership (TPP) agreement. But he has another option: to finally bring these deals out into the open. We call on him now to continue to stand up to big private interests and help ensure that our digital rights are protected.
Paul Merrell

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Inter... - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders “within a fixed time unless the government demonstrates a real need for further secrecy.” And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its “comprehensive effort to examine and enhance [its] privacy and civil liberty protections” one of the most concrete was — finally — to cap the gag orders: In response to the President’s new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation’s close. Continued nondisclosures orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • ...6 more annotations...
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.
Paul Merrell

After Brit spies 'snoop' on families' lawyers, UK govt admits: We flouted human rights ... - 0 views

  • The British government has admitted that its practice of spying on confidential communications between lawyers and their clients was a breach of the European Convention on Human Rights (ECHR). Details of the controversial snooping emerged in November: lawyers suing Blighty over its rendition of two Libyan families to be tortured by the late and unlamented Gaddafi regime claimed Her Majesty's own lawyers seemed to have access to the defense team's emails. The families' briefs asked for a probe by the secretive Investigatory Powers Tribunal (IPT), a move that led to Wednesday's admission. "The concession the government has made today relates to the agencies' policies and procedures governing the handling of legally privileged communications and whether they are compatible with the ECHR," a government spokesman said in a statement to the media, via the Press Association. "In view of recent IPT judgments, we acknowledge that the policies applied since 2010 have not fully met the requirements of the ECHR, specifically Article 8. This includes a requirement that safeguards are made sufficiently public."
  • The guidelines revealed by the investigation showed that MI5 – which handles the UK's domestic security – had free rein to spy on highly private and sensitive lawyer-client conversations between April 2011 and January 2014. MI6, which handles foreign intelligence, had no rules on the matter either until 2011, and even those were considered void if "extremists" were involved. Britain's answer to the NSA, GCHQ, had rules against such spying, but they too were relaxed in 2011. "By allowing the intelligence agencies free rein to spy on communications between lawyers and their clients, the Government has endangered the fundamental British right to a fair trial," said Cori Crider, a director at the non-profit Reprieve and one of the lawyers for the Libyan families. "For too long, the security services have been allowed to snoop on those bringing cases against them when they speak to their lawyers. In doing so, they have violated a right that is centuries old in British common law. Today they have finally admitted they have been acting unlawfully for years."
  • Crider said it now seemed probable that UK snoopers had been listening in on the communications over the Libyan case. The British government hasn't admitted guilt, but it has at least acknowledged that it was doing something wrong – sort of. "It does not mean that there was any deliberate wrongdoing on the part of the security and intelligence agencies, which have always taken their obligation to protect legally privileged material extremely seriously," the government spokesman said. "Nor does it mean that any of the agencies' activities have prejudiced or in any way resulted in an abuse of process in any civil or criminal proceedings. The agencies will now work with the independent Interception of Communications Commissioner to ensure their policies satisfy all of the UK's human rights obligations." So that's all right, then.
  •  
    If you follow the "November" link you'll learn that yes, indeed, the UK government lawyers were happily getting the content of their adversaries' privileged attorney-client communications. Conspicuously, the promises of reform make no mention of what is surely a disbarment offense in the U.S. I doubt that it's different in the UK. Discovery rules of procedure strictly limit how parties may obtain information from the other side. Wiretapping the other side's lawyers is not a permitted form of discovery. Hopefully, at least the government lawyers in the case in which the misbehavior was discovered have been referred for disciplinary action.
Paul Merrell

WASHINGTON: CIA admits it broke into Senate computers; senators call for spy chief's ou... - 0 views

  • An internal CIA investigation confirmed allegations that agency personnel improperly intruded into a protected database used by Senate Intelligence Committee staff to compile a scathing report on the agency’s detention and interrogation program, prompting bipartisan outrage and at least two calls for spy chief John Brennan to resign. “This is very, very serious, and I will tell you, as a member of the committee, someone who has great respect for the CIA, I am extremely disappointed in the actions of the agents of the CIA who carried out this breach of the committee’s computers,” said Sen. Saxby Chambliss, R-Ga., the committee’s vice chairman.
  • The rare display of bipartisan fury followed a three-hour private briefing by Inspector General David Buckley. His investigation revealed that five CIA employees, two lawyers and three information technology specialists improperly accessed or “caused access” to a database that only committee staff were permitted to use. Buckley’s inquiry also determined that a CIA crimes report to the Justice Department alleging that the panel staff removed classified documents from a top-secret facility without authorization was based on “inaccurate information,” according to a summary of the findings prepared for the Senate and House intelligence committees and released by the CIA. In other conclusions, Buckley found that CIA security officers conducted keyword searches of the emails of staffers of the committee’s Democratic majority _ and reviewed some of them _ and that the three CIA information technology specialists showed “a lack of candor” in interviews with Buckley’s office.
  • The inspector general’s summary did not say who may have ordered the intrusion or when senior CIA officials learned of it. Following the briefing, some senators struggled to maintain their composure over what they saw as a violation of the constitutional separation of powers between an executive branch agency and its congressional overseers. “We’re the only people watching these organizations, and if we can’t rely on the information that we’re given as being accurate, then it makes a mockery of the entire oversight function,” said Sen. Angus King, an independent from Maine who caucuses with the Democrats. The findings confirmed charges by the committee chairwoman, Sen. Dianne Feinstein, D-Calif., that the CIA intruded into the database that by agreement was to be used by her staffers compiling the report on the harsh interrogation methods used by the agency on suspected terrorists held in secret overseas prisons under the George W. Bush administration. The findings also contradicted Brennan’s denials of Feinstein’s allegations, prompting two panel members, Sens. Mark Udall, D-Colo., and Martin Heinrich, D-N.M., to demand that the spy chief resign.
  • ...7 more annotations...
  • Another committee member, Sen. Ron Wyden, D-Ore., and some civil rights groups called for a fuller investigation. The demands clashed with a desire by President Barack Obama, other lawmakers and the CIA to move beyond the controversy over the “enhanced interrogation program” after Feinstein releases her committee’s report, which could come as soon as next week. Many members demanded that Brennan explain his earlier denial that the CIA had accessed the Senate committee database. “Director Brennan should make a very public explanation and correction of what he said,” said Sen. Carl Levin, D-Mich. He all but accused the Justice Department of a coverup by deciding not to pursue a criminal investigation into the CIA’s intrusion.
  • “I thought there might have been information that was produced after the department reached their conclusion,” he said. “What I understand, they have all of the information which the IG has.” He hinted that the scandal goes further than the individuals cited in Buckley’s report. “I think it’s very clear that CIA people knew exactly what they were doing and either knew or should’ve known,” said Levin, adding that he thought that Buckley’s findings should be referred to the Justice Department. A person with knowledge of the issue insisted that the CIA personnel who improperly accessed the database “acted in good faith,” believing that they were empowered to do so because they believed there had been a security violation. “There was no malicious intent. They acted in good faith believing they had the legal standing to do so,” said the knowledgeable person, who asked not to be further identified because they weren’t authorized to discuss the issue publicly. “But it did not conform with the legal agreement reached with the Senate committee.”
  • Buckley’s findings clashed with denials by Brennan that he issued only hours after Feinstein’s blistering Senate speech. “As far as the allegations of, you know, CIA hacking into, you know, Senate computers, nothing could be further from the truth. I mean, we wouldn’t do that. I mean, that’s _ that’s just beyond the _ you know, the scope of reason in terms of what we would do,” he said in an appearance at the Council on Foreign Relations. White House Press Secretary Josh Earnest issued a strong defense of Brennan, crediting him with playing an “instrumental role” in the administration’s fight against terrorism, in launching Buckley’s investigation and in looking for ways to prevent such occurrences in the future. Earnest was asked at a news briefing whether there was a credibility issue for Brennan, given his forceful denial in March. “Not at all,” he replied, adding that Brennan had suggested the inspector general’s investigation in the first place. And, he added, Brennan had taken the further step of appointing the accountability board to review the situation and the conduct of those accused of acting improperly to “ensure that they are properly held accountable for that conduct.”
  • Feinstein called Brennan’s apology and his decision to submit Buckley’s findings to the accountability board “positive first steps.” “This IG report corrects the record and it is my understanding that a declassified report will be made available to the public shortly,” she said in a statement. “The investigation confirmed what I said on the Senate floor in March _ CIA personnel inappropriately searched Senate Intelligence Committee computers in violation of an agreement we had reached, and I believe in violation of the constitutional separation of powers,” she said. It was not clear why Feinstein didn’t repeat her charges from March that the agency also may have broken the law and had sought to “thwart” her investigation into the CIA’s use of waterboarding, which simulates drowning, sleep deprivation and other harsh interrogation methods _ tactics denounced by many experts as torture.
  • The allegations and the separate CIA charge that the committee staff removed classified documents from the secret CIA facility in Northern Virginia without authorization were referred to the Justice Department for investigation. The department earlier this month announced that it had found insufficient evidence on which to proceed with criminal probes into either matter “at this time.” Thursday, Justice Department officials declined comment.
  • In her speech, Feinstein asserted that her staff found the material _ known as the Panetta review, after former CIA Director Leon Panetta, who ordered it _ in the protected database and that the CIA discovered the staff had it by monitoring its computers in violation of the user agreement. The inspector general’s summary, which was prepared for the Senate and the House intelligence committees, didn’t identify the CIA personnel who had accessed the Senate’s protected database. Furthermore, it said, the CIA crimes report to the Justice Department alleging that panel staffers had removed classified materials without permission was grounded on inaccurate information. The report is believed to have been sent by the CIA’s then acting general counsel, Robert Eatinger, who was a legal adviser to the interrogation program. “The factual basis for the referral was not supported, as the author of the referral had been provided inaccurate information on which the letter was based,” said the summary, noting that the Justice Department decided not to pursue the issue.
  • Christopher Anders, senior legislative counsel with the American Civil Liberties Union, criticized the CIA announcement, saying that “an apology isn’t enough.” “The Justice Department must refer the (CIA) inspector general’s report to a federal prosecutor for a full investigation into any crimes by CIA personnel or contractors,” said Anders.
  •  
    And no one but the lowest ranking staffer knew anything about it, not even the CIA lawyer who made the criminal referral to the Justice Dept., alleging that the Senate Intelligence Committee had accessed classified documents it wasn't authorized to access. So the Justice Dept. announces that there's insufficient evidence to warrant a criminal investigation. As though the CIA lawyer's allegations were not based on the unlawful surveillance of the Senate Intelligence Committee's network.  Can't we just get an official announcement that Attorney General Holder has decided that there shall be a cover-up? 
Paul Merrell

Snowden: NSA employees routinely pass around intercepted nude photos | Ars Technica - 0 views

  • Edward Snowden has revealed that he witnessed “numerous instances” of National Security Agency (NSA) employees passing around nude photos that were intercepted “in the course of their daily work.” In a 17-minute interview with The Guardian filmed at a Moscow hotel and published on Thursday, the NSA whistleblower addressed numerous points, noting that he could “live with” being sent to the US prison facility at Guantanamo Bay, Cuba. He also again dismissed any notion that he was a Russian spy or agent—calling those allegations “bullshit.” If Snowden’s allegations of sexual photo distribution are true, they would be consistent with what the NSA has already reported. In September 2013, in a letter from the NSA’s Inspector General Dr. George Ellard to Sen. Chuck Grassley (R-IA), the agency outlined a handful of instances during which NSA agents admitted that they had spied on their former love interests. This even spawned a nickname within the agency, LOVEINT—a riff on HUMINT (human intelligence) or SIGINT (signals intelligence).
  • “You've got young enlisted guys, 18 to 22 years old,” Snowden said. “They've suddenly been thrust into a position of extraordinary responsibility where they now have access to all of your private records. In the course of their daily work they stumble across something that is completely unrelated to their work in any sort of necessary sense. For example, an intimate nude photo of someone in a sexually compromising position. But they're extremely attractive. “So what do they do? They turn around in their chair and show their co-worker. The co-worker says: ‘Hey that's great. Send that to Bill down the way.’ And then Bill sends it to George and George sends it to Tom. And sooner or later this person's whole life has been seen by all of these other people. It's never reported. Nobody ever knows about it because the auditing of these systems is incredibly weak. The fact that your private images, records of your private lives, records of your intimate moments have been taken from your private communications stream from the intended recipient and given to the government without any specific authorization without any specific need is itself a violation of your rights. Why is that in a government database?” Then Alan Rusbridger, The Guardian’s editor-in-chief, asked: “You saw instances of that happening?” “Yeah,” Snowden responded. “Numerous?” “It's routine enough, depending on the company that you keep, it could be more or less frequent. These are seen as the fringe benefits of surveillance positions.”
Paul Merrell

UK ISPs to introduce jihadi and terror content reporting button | Technology | The Guar... - 0 views

  • Internet companies have agreed to do more to tackle extremist material online following negotiations led by Downing Street. The UK’s major Internet service providers – BT, Virgin, Sky and TalkTalk – have this week committed to host a public reporting button for terrorist material online, similar to the reporting button which allows the public to report child sexual exploitation. They have also agreed to ensure that terrorist and extremist material is captured by their filters to prevent children and young people coming across radicalising material. The UK is the only country in the world with a Counter Terrorism Internet Referral Unit (CTIRU) – a 24/7 law enforcement unit, based in the Met, dedicated to identifying and taking down extreme graphic material as well as material that glorifies, incites and radicalises.
  •  
    Bookburning in the digital era.
Paul Merrell

BBC News - GCHQ's Robert Hannigan says tech firms 'in denial' on extremism - 0 views

  • Web giants such as Twitter, Facebook and WhatsApp have become "command-and-control networks... for terrorists and criminals", GCHQ's new head has said. Islamic State extremists had "embraced" the web but some companies remained "in denial" over the problem, Robert Hannigan wrote in the Financial Times. He called for them to do more to co-operate with security services. However, civil liberties campaigners said the companies were already working with the intelligence agencies. None of the major tech firms has yet responded to Mr Hannigan's comments.
  • Mr Hannigan said IS had "embraced the web as a noisy channel in which to promote itself, intimidate people, and radicalise new recruits." The "security of its communications" added another challenge to agencies such as GCHQ, he said - adding that techniques for encrypting - or digitally scrambling - messages "which were once the preserve of the most sophisticated criminals or nation states now come as standard". GCHQ and its sister agencies, MI5 and the Secret Intelligence Service, could not tackle these challenges "at scale" without greater support from the private sector, including the largest US technology companies which dominate the web, he wrote.
  •  
    What I want to know is what we're going to do with that NSA data center at Bluffdale, Utah, after the NSA is abolished? Maybe give it to the Internet Archive?
Paul Merrell

Thousands Join Legal Fight Against UK Surveillance - And You Can, Too - The Intercept - 1 views

  • Thousands of people are signing up to join an unprecedented legal campaign against the United Kingdom’s leading electronic surveillance agency. On Monday, London-based human rights group Privacy International launched an initiative enabling anyone across the world to challenge covert spying operations involving Government Communications Headquarters, or GCHQ, the National Security Agency’s British counterpart. The campaign was made possible following a historic court ruling earlier this month that deemed intelligence sharing between GCHQ and the NSA to have been unlawful because of the extreme secrecy shrouding it.
  • Consequently, members of the public now have a rare opportunity to take part in a lawsuit against the spying in the Investigatory Powers Tribunal, a special British court that handles complaints about surveillance operations conducted by law enforcement and intelligence agencies. Privacy International is allowing anyone who wants to participate to submit their name, email address and phone number through a page on its website. The group plans to use the details to lodge a case with GCHQ and the court that will seek to discover whether each participant’s emails or phone calls have been covertly obtained by the agency in violation of the privacy and freedom of expression provisions of the European Convention on Human Rights. If it is established that any of the communications have been unlawfully collected, the court could force GCHQ to delete them from its vast repositories of intercepted data.
  • By Tuesday evening, more than 10,000 people had already signed up to the campaign, a spokesman for Privacy International told The Intercept. In a statement announcing the campaign on Monday, Eric King, deputy director of Privacy International, said: “The public have a right to know if they were illegally spied on, and GCHQ must come clean on whose records they hold that they should never have had in the first place. “We have known for some time that the NSA and GCHQ have been engaged in mass surveillance, but never before could anyone explicitly find out if their phone calls, emails, or location histories were unlawfully shared between the U.S. and U.K. “There are few chances that people have to directly challenge the seemingly unrestrained surveillance state, but individuals now have a historic opportunity to finally hold GCHQ accountable for their unlawful actions.”
Paul Merrell

It's A-OK for FBI agents to silence web giants, says appeals court * The Register - 1 views

  • Gagging orders in the FBI's National Security Letters are all above board and constitutional, a California court has ruled. These security letters are typically sent to internet giants demanding information on whoever is behind a username or email address. Crucially, these requests include clauses that prevent the organizations from warning specific subscribers that they are under surveillance by the Feds. Cloudflare and Credo Mobile aren't happy with that, and – with the help of rights warriors at the EFF – challenged the gagging orders. Despite earlier successes in their legal battle, the 9th US Circuit Court of Appeals ruled [PDF] on Monday that the gagging orders do not trample on First Amendment rights.
  • The FBI dishes out thousands of National Security Letters (NSLs) every year; they can simply be issued by a special agent in charge in a bureau field office, and don’t require judicial review. They allow the Feds to obtain the name, address, and records of any services used – but not the contents of conversations – plus billing records of a person, and forbid the hosting company from telling the subject, meaning those under investigation can’t challenge the decision. It used to be the case that companies couldn’t even mention the existence of the NSL system for fear of prosecution. However, in 2013 a US district court in San Francisco ruled that such extreme gagging violated the First Amendment. That decision came after Google, and later others, started publishing the number of NSL orders that had been received, in defiance of the law. In 2015 the Obama administration amended the law to allow companies limited rights to disclose NSL orders, and to set a three-year limit for the gagging order. It also set up a framework for companies to challenge the legitimacy of NSL subpoenas, and it was these changes that caused the appeals court verdict in favor of the government.
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
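The content/metadata boundary described in the annotations above — a full address such as www.gchq.gov.uk/what_we_do is treated as content, while the bare domain is treated as metadata — can be illustrated with a short sketch. The `classify` helper below is invented for illustration only; it is not GCHQ tooling, merely a restatement of the reported classification rule:

```python
from urllib.parse import urlsplit

def classify(url):
    """Apply the reported rule: the scheme and host alone count as
    metadata, while the full address (including the path) counts as
    content."""
    parts = urlsplit(url)
    metadata = f"{parts.scheme}://{parts.netloc}"  # e.g. the bare domain
    content = url                                  # the full visited address
    return metadata, content

metadata, content = classify("https://www.gchq.gov.uk/what_we_do")
print(metadata)  # https://www.gchq.gov.uk
```

As the surrounding excerpt notes, the legal significance of the split is that the "metadata" half is subject to far weaker safeguards, even though accumulated domain-level records can still be highly revealing.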
Paul Merrell

The punk rock internet - how DIY ​​rebels ​are working to ​replace the tech g... - 0 views

  • What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.
  • In the last few months, they have started working with people in the Belgian city of Ghent – or, in Flemish, Gent – where the authorities own their own internet domain, complete with .gent web addresses. Using the blueprint of Heartbeat, they want to create a new kind of internet they call the indienet – in which people control their data, are not tracked and each own an equal space online. This would be a radical alternative to what we have now: giant “supernodes” that have made a few men in northern California unimaginable amounts of money thanks to the ocean of lucrative personal information billions of people hand over in exchange for their services.
  • His alternative is what he calls the Safe network: the acronym stands for “Safe Access for Everyone”. In this model, rather than being stored on distant servers, people’s data – files, documents, social-media interactions – will be broken into fragments, encrypted and scattered around other people’s computers and smartphones, meaning that hacking and data theft will become impossible. Thanks to a system of self-authentication in which a Safe user’s encrypted information would only be put back together and unlocked on their own devices, there will be no centrally held passwords. No one will leave data trails, so there will be nothing for big online companies to harvest. The financial lubricant, Irvine says, will be a cryptocurrency called Safecoin: users will pay to store data on the network, and also be rewarded for storing other people’s (encrypted) information on their devices. Software developers, meanwhile, will be rewarded with Safecoin according to the popularity of their apps. There is a community of around 7,000 interested people already working on services that will work on the Safe network, including alternatives to platforms such as Facebook and YouTube.
  • Once MaidSafe is up and running, there will be very little any government or authority can do about it: “We can’t stop the network if we start it. If anyone turned round and said: ‘You need to stop that,’ we couldn’t. We’d have to go round to people’s houses and switch off their computers. That’s part of the whole thing. The network is like a cyber-brain; almost a lifeform in itself. And once you start it, that’s it.” Before my trip to Scotland, I tell him, I spent whole futile days signing up to some of the decentralised social networks that already exist – Steemit, Diaspora, Mastodon – and trying to approximate the kind of experience I can easily get on, say, Twitter or Facebook.
  • And herein lie two potential breakthroughs. One, according to some cryptocurrency enthusiasts, is a means of securing and protecting people’s identities that doesn’t rely on remotely stored passwords. The other is a hope that we can leave behind intermediaries such as Uber and eBay, and allow buyers and sellers to deal directly with each other. Blockstack, a startup based in New York, aims to bring blockchain technology to the masses. Like MaidSafe, its creators aim to build a new internet, and a 13,000-strong crowd of developers are already working on apps that either run on the platform Blockstack has created, or use its features. OpenBazaar is an eBay-esque service, up and running since November last year, which promises “the world’s most private, secure, and liberating online marketplace”. Casa aims to be a decentralised alternative to Airbnb; Guild is a would-be blogging service that bigs up its libertarian ethos and boasts that its founders will have “no power to remove blogs they don’t approve of or agree with”.
  • An initial version of Blockstack is already up and running. Even if data is stored on conventional drives, servers and clouds, thanks to its blockchain-based “private key” system each Blockstack user controls the kind of personal information we currently blithely hand over to Big Tech, and has the unique power to unlock it. “That’s something that’s extremely powerful – and not just because you know your data is more secure because you’re not giving it to a company,” he says. “A hacker would have to hack a million people if they wanted access to their data.”
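The fragment-encrypt-scatter model described in the excerpts above can be sketched in a few lines. This is a toy illustration only, assuming nothing about MaidSafe's actual "self-encryption" scheme or Safe network addressing: the XOR "cipher", the tiny chunk size, and all function names are invented for the sketch, and a real system would use an authenticated cipher (such as AES-GCM) and a content-addressed peer-to-peer store.

```python
# Toy sketch: split data into chunks, encrypt each chunk with a key
# derived from the owner's secret, and address each fragment by the
# hash of its ciphertext, so the peers storing it learn nothing.
import hashlib
import os

CHUNK_SIZE = 4  # tiny for demonstration; real systems use ~1 MB chunks

def xor_bytes(data, key):
    # Stand-in cipher for the sketch only; not secure on its own.
    return bytes(b ^ k for b, k in zip(data, key))

def fragment_and_encrypt(blob, secret):
    """Return (order, fragments): the owner keeps `order` to reassemble;
    `fragments` is what gets scattered across other people's devices."""
    fragments = {}
    order = []
    for i in range(0, len(blob), CHUNK_SIZE):
        chunk = blob[i:i + CHUNK_SIZE].ljust(CHUNK_SIZE, b"\0")
        key = hashlib.sha256(secret + i.to_bytes(8, "big")).digest()[:CHUNK_SIZE]
        ciphertext = xor_bytes(chunk, key)
        addr = hashlib.sha256(ciphertext).hexdigest()  # content address
        fragments[addr] = ciphertext
        order.append(addr)
    return order, fragments

def reassemble(order, fragments, secret, length):
    out = b""
    for i, addr in zip(range(0, length, CHUNK_SIZE), order):
        key = hashlib.sha256(secret + i.to_bytes(8, "big")).digest()[:CHUNK_SIZE]
        out += xor_bytes(fragments[addr], key)
    return out[:length]

data = b"private message"
secret = os.urandom(32)
order, frags = fragment_and_encrypt(data, secret)
assert reassemble(order, frags, secret, len(data)) == data
```

The point the articles make follows directly from the shape of this design: because only the owner holds the secret and the fragment order, no central party has a password database to breach, and an attacker would have to compromise many individual devices rather than one server.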
Paul Merrell

Evidence of Google blacklisting of left and progressive sites continues to mount - Worl... - 0 views

  • A growing number of leading left-wing websites have confirmed that their search traffic from Google has plunged in recent months, adding to evidence that Google, under the cover of a fraudulent campaign against fake news, is implementing a program of systematic and widespread censorship. Truthout, a not-for-profit news website that focuses on political, social, and ecological developments from a left progressive standpoint, has seen its readership plunge by 35 percent since April. The Real News, a nonprofit video news and documentary service, has had its search traffic fall by 37 percent. Another site, Common Dreams, last week told the WSWS that its search traffic had fallen by up to 50 percent. As extreme as these sudden drops in search traffic are, they do not equal the nearly 70 percent drop in traffic from Google seen by the WSWS. “This is political censorship of the worst sort; it’s just an excuse to suppress political viewpoints,” said Robert Epstein, a former editor in chief of Psychology Today and noted expert on Google. Epstein said that at this point, the question was whether the WSWS had been flagged specifically by human evaluators employed by the search giant, or whether those evaluators had influenced the Google Search engine to demote left-wing sites. “What you don’t know is whether this was the human evaluators who are demoting you, or whether it was the new algorithm they are training,” Epstein said.
  • Richard Stallman, the world-renowned technology pioneer and a leader of the free software movement, said he had read the WSWS’s coverage on Google’s censorship of left-wing sites. He warned about the immense control exercised by Google over the Internet, saying, “For people’s main way of finding articles about a topic to be run by a giant corporation creates an obvious potential for abuse.” According to data from the search optimization tool SEMRush, search traffic to Mr. Stallman’s personal website, Stallman.org, fell by 24 percent, while traffic to gnu.org, operated by the Free Software Foundation, fell 19 percent. Eric Maas, a search engine optimization consultant working in the San Francisco Bay area, said his team has surveyed a wide range of alternative news sites affected by changes in Google’s algorithms since April. “While the update may be targeting specific site functions, there is evidence that this update is promoting only large mainstream news organizations. What I find problematic with this is that it appears that some sites have been targeted and others have not.” The massive drop in search traffic to the WSWS and other left-wing sites followed the implementation of changes in Google’s search evaluation protocols. In a statement issued on April 25, Ben Gomes, the company’s vice president for engineering, stated that Google’s update of its search engine would block access to “offensive” sites, while working to surface more “authoritative content.” In a set of guidelines issued to Google evaluators in March, the company instructed its search evaluators to flag pages returning “conspiracy theories” or “upsetting” content unless “the query clearly indicates the user is seeking an alternative viewpoint.”
Paul Merrell

UK government is secretly planning to break encryption and spy on people's phones, reve... - 0 views

  • The UK government is secretly planning to force technology companies to build backdoors into their products, to enable intelligence agencies to read people’s private messages. A draft document leaked by the Open Rights Group details extreme new surveillance proposals, which would enable government agencies to spy on one in 10,000 citizens – around 6,500 people – at any one time. The document, which follows the controversial Investigatory Powers Act, reveals government plans to force mobile operators and internet service providers to provide real-time communications of customers to the government “in an intelligible form”, and within one working day.
  • This would effectively ban encryption, an important security measure used by a wide range of companies, including WhatsApp and major banks, to keep people’s private data private and to protect them from hackers and cyber criminals.