With millions of iPhones out there, it makes sense to have your content or application available on that platform. But how do you go about doing it? Where do you go to get started? And what steps do you need to take?
This article from interface designer Etan Rozin is an introduction to the various ways of getting content and applications onto the iPhone. It is by no means a full guide, but it aims to point you in the right direction and give you an overview of what the process involves.
Excellent explanation and collection of valuable resources!
You’re about to see the mother of all flamewars on internet groups where web developers hang out. It’ll make the Battle of Stalingrad look like that time your sister-in-law stormed out of afternoon tea at your grandmother’s and wrapped the Mustang around a tree.
The flame war will revolve around the issue of something called “web standards.”
The FCC chairman sent “inquiry letters” to Apple and AT&T on Saturday to get answers as to why the companies rejected a voice application developed by Google for the iPhone. “The Federal Communications Commission has a mission to foster a competitive wireless marketplace, protect and empower consumers, and promote innovation and investment,” said FCC chairman Julius Genachowski. The letters concern Apple and AT&T’s recent decision to bar a Google Voice application from being offered on Apple’s App Store for the iPhone.
At this point, AT&T holds a contract with Apple making it the exclusive wireless carrier to offer the iPhone.
Although both IE7 and IE8 include a "sandbox" defense dubbed "Protected Mode," the feature works only when the browsers are run in Vista (IE7 and IE8) or Windows 7 (IE8). Google's Chrome Frame, however, prevents malicious code from escaping the browser -- and worming its way into, say, the operating system -- on Windows XP as well.
Yesterday, Microsoft warned users that they would double their security problems by using Chrome Frame, the plug-in that provides better JavaScript performance and adds support for HTML 5 to Microsoft's browser.
Chrome Frame lets IE utilize the Chrome browser's WebKit rendering engine, as well as its high-performance V8 JavaScript engine. The extra speed and HTML 5 support are necessary, Google said earlier this week, if IE users are to run advanced Web applications such as Google Wave, the collaboration and communications tool that Google launched in May. Google pitched the plug-in as a way to instantly improve the performance of the notoriously slow IE, and as a way for Web developers to support standards IE can't handle, including HTML 5. The Chrome Frame plug-in works with IE6, IE7 and IE8 on Windows XP and Windows Vista.
Yahoo! may be shedding more mojo.
According to All Things Digital, the web giant is looking to offload Zimbra, the open-source email and collaboration outfit it acquired just two years ago for $350m. Sources tell ATD that Comcast and Google are potential buyers.
According to the company, Zimbra is now powering more than 50 million paid mailboxes worldwide.
Zimbra tech is also part of Yahoo!'s web-based Mail and Calendar tools for consumer Yahooligans. And by Yahoo!'s count, the Zimbra open source community is now 30,000-developers strong, driving more than 50,000 open source downloads a month.
Much has been made of how HTML5 will "kill" proprietary media tools and players from Adobe Systems and Microsoft.
The idea has been partly predicated on the hope that those working on HTML5 would enshrine a baseline spec for audio and video codecs everybody could agree on, buy into, and support.
But the hope of a universal media experience is dead, at least for now. Apple, Mozilla, Opera, Microsoft, and - yes - Google could not agree on a common set of audio or video codecs for use in the proposed HTML5 spec.
That means major browsers and media players will continue to implement the codecs and APIs ordained by their owners, as they’ve always done, leaving developers and customers to pick a side or go to the additional cost and effort of supporting different players.
At the Web 2.0 Expo, Tim O'Reilly predicts that Microsoft will emerge as a leading proponent of the open Web, despite the company's tradition of fostering its own proprietary operating systems and development languages. O'Reilly says Microsoft's recent deals to index Twitter tweets and use Wolfram Alpha's APIs for computational data show a shift in its willingness to work with other Web companies. Moreover, the Windows Azure cloud computing operating system is designed to work with open-source technology.
Yesterday, Microsoft announced the release of Version 1.0 technical documentation for Microsoft Office 2007, SharePoint 2007 and Exchange 2007 in an effort to drive greater interoperability and foster a stronger open relationship with its developer and partner communities. It also posted over 5,000 pages of technical documentation on the Microsoft Office Word, Excel and PowerPoint binary file formats on the MSDN site, on a royalty-free basis, under Microsoft’s Open Specification Promise (OSP).
Not long ago, Mozilla coders announced that they were starting to build PDF.js, a way to display Acrobat documents in the browser using pure web code. No longer will you have to fight with an external PDF plug-in in Firefox. Huzzah!
Development on PDF.js has progressed to the point now where you can take an early peek at it. The restart-free add-on is available from the GitHub repository — just download the .XPI in Firefox and click to install.
"Today at the first annual Code Conference, Microsoft demonstrated its new real-time translation in Skype publicly for the first time. Gurdeep Pall, Microsoft's VP of Skype and Lync, compares the technology to Star Trek's Universal Translator. During the demonstration, Pall converses in English with a coworker in Germany who is speaking German. 'Skype Translator results from decades of work by the industry, years of work by our researchers, and now is being developed jointly by the Skype and Microsoft Translator teams. The demo showed near real-time audio translation from English to German and vice versa, combining Skype voice and IM technologies with Microsoft Translator, and neural network-based speech recognition.'"
Haven't yet explored what's beneath the marketing hype. And I'm less than excited about Skype, with its NSA tendrils, being the vehicle for audio translation of human languages. But given the progress in: [i] automated translation of human texts; [ii] audio screenreaders; and [iii] voice-to-text transcription, this is one we saw coming. Slap the three technologies together and wait until processing power catches up to what's needed to produce a marketable experience. After all, the Star Trek scriptwriters saw this coming too.
Ray Kurzweil, now at Google, should get a lot of the pioneer credit here. His revolutionary optical character recognition algorithms soon found themselves redeployed in text-to-speech synthesis and speech recognition technology. From Wikipedia: "Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer Kurzweil K250 capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition."
Not bad for a guy the same age as my younger brother.
But Microsoft's announcement here may be more vaporware than shipping hardware and executable code. Microsoft has a long history of vaporware announcements meant to persuade potential customers to hold off on riding with the competition.
And the Softies undoubtedly know that Google's human-language text translation capabilities are way out in front, and that its voice-to-text and text-to-speech APIs have already found a comfortable home in Android and Chromebooks.
What does Microsoft have that's ready to ship, if anything? I'll check it out tomorrow.
NSA spies need jobs, too. And that is why many covert programs could be hiding in plain sight.
Job websites such as LinkedIn and Indeed.com contain hundreds of profiles that reference classified NSA efforts, posted by everyone from career government employees to low-level IT workers who served in Iraq or Afghanistan. They offer a rare glimpse into the intelligence community's projects and how they operate. Now some researchers are using the same kinds of big-data tools employed by the NSA to scrape public LinkedIn profiles for classified programs. But the presence of so much classified information in public view raises serious concerns about security — and about the intelligence industry as a whole.
“I’ve spent the past couple of years searching LinkedIn profiles for NSA programs,” said Christopher Soghoian, the principal technologist with the American Civil Liberties Union’s Speech, Privacy and Technology Project.
On Aug. 3, The Wall Street Journal published a story about the FBI’s growing use of hacking to monitor suspects, based on information Soghoian provided. The next day, Soghoian spoke at the Defcon hacking conference about how he uncovered the existence of the FBI’s hacking team, known as the Remote Operations Unit (ROU), using the LinkedIn profiles of two employees at James Bimen Associates, with which the FBI contracts for hacking operations.
“Had it not been for the sloppy actions of a few contractors updating their LinkedIn profiles, we would have never known about this,” Soghoian said in his Defcon talk. Those two contractors were not the only ones being sloppy.
“I was, like, huh, maybe there’s more we can do with this — actually get a list of all these profiles that have these results and use that to analyze the structure of which companies are helping with which programs, which people are helping with which programs, try to figure out in what capacity, and learn more about things that we might not know about,” McGrath said.
He set up a computer program called a scraper to search LinkedIn for public profiles that mention known NSA programs, contractors or jargon — such as SIGINT, the agency’s term for “signals intelligence” gleaned from intercepted communications. Once the scraper found the name of an NSA program, it searched nearby for other words in all caps. That allowed McGrath to find the names of unknown programs, too.
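The harvesting step described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the technique, not McGrath's actual scraper; the seed list of program names and the size of the "nearby" window are assumptions.

```python
import re

# Known NSA program names and jargon used as seed terms (an assumed,
# abbreviated list for illustration).
KNOWN_PROGRAMS = {"XKEYSCORE", "DISHFIRE", "PINWALE", "SIGINT"}

def candidate_codenames(profile_text, window=80):
    """Find all-caps tokens near a known program name in a profile's text.

    Matching the seed term is case-insensitive, but candidates must be
    genuinely all-caps in the original text, mirroring the heuristic that
    code names are written in capitals.
    """
    upper = profile_text.upper()
    candidates = set()
    for name in KNOWN_PROGRAMS:
        for match in re.finditer(re.escape(name), upper):
            # Look at a window of text surrounding the known name.
            start = max(0, match.start() - window)
            end = min(len(profile_text), match.end() + window)
            # Collect all-caps words of 4+ letters from the original text.
            for token in re.findall(r"\b[A-Z]{4,}\b", profile_text[start:end]):
                if token not in KNOWN_PROGRAMS:
                    candidates.add(token)
    return candidates
```

Running this over thousands of scraped profiles and counting co-occurrences would yield exactly the kind of program/contractor relationship data the network graph visualizes.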
Once McGrath had the raw data — thousands of profiles in all, with 70 to 80 different program names — he created a network graph that showed the relationships between specific government agencies, contractors and intelligence programs. Of course, the data are limited to what people are posting on their LinkedIn profiles. Still, the network graph gives a sense of which contractors work on several NSA programs, which ones work on just one or two, and even which programs military units in Iraq and Afghanistan are using. And that is just the beginning.
And there are many more. A quick search of Indeed.com using three code names unlikely to return false positives — Dishfire, XKeyscore and Pinwale — turned up 323 résumés. The same search on LinkedIn turned up 48 profiles mentioning Dishfire, 18 mentioning XKeyscore and 74 mentioning Pinwale. Almost all these people appear to work in the intelligence industry.
Network-mapping the data
Fabio Pietrosanti of the Hermes Center for Transparency and Digital Human Rights noticed all the code names on LinkedIn last December. While sitting with M.C. McGrath at the Chaos Communication Congress in Hamburg, Germany, Pietrosanti began searching the website for classified program names — and getting serious results. McGrath was already developing Transparency Toolkit, a Web application for investigative research, and knew he could improve on Pietrosanti’s off-the-cuff methods.
An interactive network illustration shows the relationships between specific national security surveillance programs (in red) and government organizations or private contractors (in blue).
What a giggle, public spying on NSA and its contractors using Big Data. The interactive network graph with its sidebar display of relevant data derived from LinkedIn profiles is just too delightful.
Blowing the whistle on wrongdoing creates a moral frequency that vast numbers of people are eager to hear. We don’t want our lives, communities, country and world continually damaged by the deadening silences of fear and conformity.
I’ve met many whistleblowers over the years, and they’ve been extraordinarily ordinary. None were applying for halos or sainthood. All experienced anguish before deciding that continuous inaction had a price that was too high. All suffered negative consequences as well as relief after they spoke up and took action. All made the world better with their courage.
Whistleblowers don’t sign up to be whistleblowers. Almost always, they begin their work as true believers in the system that conscience later compels them to challenge.
“It took years of involvement with a mendacious war policy, evidence of which was apparent to me as early as 2003, before I found the courage to follow my conscience,” Matthew Hoh recalled this week. “It is not an easy or light decision for anyone to make, but we need members of our military, development, diplomatic and intelligence community to speak out if we are ever to have a just and sound foreign policy.”
Hoh describes his record this way:
“After over 11 continuous years of service with the U.S. military and U.S. government, nearly six of those years overseas, including service in Iraq and Afghanistan, as well as positions within the Secretary of the Navy’s Office as a White House Liaison, and as a consultant for the State Department’s Iraq Desk, I resigned from my position with the State Department in Afghanistan in protest of the escalation of war in 2009.”
Another former Department of State official, the ex-diplomat and retired Army colonel Ann Wright, who resigned in protest of the Iraq invasion in March 2003, is crossing paths with Hoh on Friday as they do the honors at a ribbon-cutting — half a block from the State Department headquarters in Washington — for a billboard with a picture of Pentagon Papers whistleblower Daniel Ellsberg. Big-lettered words begin by referring to the years he waited before releasing the Pentagon Papers in 1971. “Don’t do what I did,” Ellsberg says on the billboard.
“Don’t wait until a new war has started, don’t wait until thousands more have died, before you tell the truth with documents that reveal lies or crimes or internal projections of costs and dangers. You might save a war’s worth of lives.”
The billboard – sponsored by the ExposeFacts organization, which launched this week — will spread to other prominent locations in Washington and beyond. As an organizer for ExposeFacts, I’m glad to report that outreach to potential whistleblowers is just getting started. (For details, visit ExposeFacts.org.) We’re propelled by the kind of hopeful determination that Hoh expressed the day before the billboard ribbon-cutting when he said: “I trust ExposeFacts and its efforts will encourage others to follow their conscience and do what is right.”
The journalist Kevin Gosztola, who has astutely covered a range of whistleblower issues for years, pointed this week to the imperative of opening up news media. “There is an important role for ExposeFacts to play in not only forcing more transparency, but also inspiring more media organizations to engage in adversarial journalism,” he wrote.
“Such journalism is called for in the face of wars, environmental destruction, escalating poverty, egregious abuses in the justice system, corporate control of government, and national security state secrecy. Perhaps a truly successful organization could inspire U.S. media organizations to play much more of a watchdog role than a lapdog role when covering powerful institutions in government.”
Overall, we desperately need to nurture and propagate a steadfast culture of outspoken whistleblowing. A central motto of the AIDS activist movement dating back to the 1980s – Silence = Death – remains urgently relevant in a vast array of realms. Whether the problems involve perpetual war, corporate malfeasance, climate change, institutionalized racism, patterns of sexual assault, toxic pollution or countless other ills, none can be alleviated without bringing grim realities into the light. “All governments lie,” Ellsberg says in a video statement released for the launch of ExposeFacts,
“and they all like to work in the dark as far as the public is concerned, in terms of their own decision-making, their planning — and to be able to allege, falsely, unanimity in addressing their problems, as if no one who had knowledge of the full facts inside could disagree with the policy the president or the leader of the state is announcing.”
Ellsberg adds:
“A country that wants to be a democracy has to be able to penetrate that secrecy, with the help of conscientious individuals who understand in this country that their duty to the Constitution and to the civil liberties and to the welfare of this country definitely surmount their obligation to their bosses, to a given administration, or in some cases to their promise of secrecy.”
Right now, our potential for democracy owes a lot to people like NSA whistleblowers William Binney and Kirk Wiebe, and EPA whistleblower Marsha Coleman-Adebayo. When they spoke at the June 4 news conference in Washington that launched ExposeFacts, their brave clarity was inspiring.
Antidotes to the poisons of cynicism and passive despair can emerge from organizing to help create a better world. The process requires applying a single standard to the real actions of institutions and individuals, no matter how big their budgets or grand their power. What cannot withstand the light of day should not be suffered in silence.
If you see something, say something.
While some governments -- my own included -- attempt to impose an Orwellian Dark State of ubiquitous secret surveillance, secret wars, the rule of oligarchs, and public ignorance, the Edward Snowden leaks fanned the flames of the countering War on Ignorance that civil libertarians had kept alive. Only days after the U.S. Supreme Court denied review in a case where a reporter had been ordered, under penalty of contempt of court (a long stretch in jail), to reveal his source for a book on the Dark State, a new web site has been launched for communications between sources and journalists in which the sources' names never need to be revealed.
This article is part of the publicity for that new weapon fielded by the civil libertarian side in the War Against Ignorance.
Hurrah!
When the incoming emails stopped arriving, it seemed innocuous at first. But it would eventually become clear that this was no routine technical problem. Inside a row of gray office buildings in Brussels, a major hacking attack was in progress. And the perpetrators were British government spies.
It was in the summer of 2012 that the anomalies were initially detected by employees at Belgium’s largest telecommunications provider, Belgacom. But it wasn’t until a year later, in June 2013, that the company’s security experts were able to figure out what was going on. The computer systems of Belgacom had been infected with a highly sophisticated malware, and it was disguising itself as legitimate Microsoft software while quietly stealing data.
Last year, documents from National Security Agency whistleblower Edward Snowden confirmed that British surveillance agency Government Communications Headquarters was behind the attack, codenamed Operation Socialist. And in November, The Intercept revealed that the malware found on Belgacom’s systems was one of the most advanced spy tools ever identified by security researchers, who named it “Regin.”
The full story about GCHQ’s infiltration of Belgacom, however, has never been told. Key details about the attack have remained shrouded in mystery—and the scope of the attack unclear.
Now, in partnership with Dutch and Belgian newspapers NRC Handelsblad and De Standaard, The Intercept has pieced together the first full reconstruction of events that took place before, during, and after the secret GCHQ hacking operation.
Based on new documents from the Snowden archive and interviews with sources familiar with the malware investigation at Belgacom, The Intercept and its partners have established that the attack on Belgacom was more aggressive and far-reaching than previously thought. It occurred in stages between 2010 and 2011, each time penetrating deeper into Belgacom’s systems, eventually compromising the very core of the company’s networks.
Snowden told The Intercept that the latest revelations amounted to unprecedented “smoking-gun attribution for a governmental cyber attack against critical infrastructure.”
The Belgacom hack, he said, is the “first documented example to show one EU member state mounting a cyber attack on another…a breathtaking example of the scale of the state-sponsored hacking problem.”
Publicly, Belgacom has played down the extent of the compromise, insisting that only its internal systems were breached and that customers’ data was never found to have been at risk. But secret GCHQ documents show the agency gained access far beyond Belgacom’s internal employee computers and was able to grab encrypted and unencrypted streams of private communications handled by the company.
Belgacom invested several million dollars in its efforts to clean up its systems and beef up its security after the attack. However, The Intercept has learned that sources familiar with the malware investigation at the company are uncomfortable with how the clean-up operation was handled—and they believe parts of the GCHQ malware were never fully removed.
The revelations about the scope of the hacking operation will likely alarm Belgacom’s customers across the world. The company operates a large number of data links internationally (see interactive map below), and it serves millions of people across Europe as well as officials from top institutions including the European Commission, the European Parliament, and the European Council. The new details will also be closely scrutinized by a federal prosecutor in Belgium, who is currently carrying out a criminal investigation into the attack on the company.
Sophia in ’t Veld, a Dutch politician who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept that she believes the British government should face sanctions if the latest disclosures are proven.
What sets the secret British infiltration of Belgacom apart is that it was perpetrated against a close ally—and is backed up by a series of top-secret documents, which The Intercept is now publishing.
Between 2009 and 2011, GCHQ worked with its allies to develop sophisticated new tools and technologies it could use to scan global networks for weaknesses and then penetrate them. According to top-secret GCHQ documents, the agency wanted to adopt the aggressive new methods in part to counter the use of privacy-protecting encryption—what it described as the “encryption problem.”
When communications are sent across networks in encrypted format, it makes it much harder for the spies to intercept and make sense of emails, phone calls, text messages, internet chats, and browsing sessions. For GCHQ, there was a simple solution. The agency decided that, where possible, it would find ways to hack into communication networks to grab traffic before it’s encrypted.
The Snowden documents show that GCHQ wanted to gain access to Belgacom so that it could spy on phones used by surveillance targets travelling in Europe. But the agency also had an ulterior motive. Once it had hacked into Belgacom’s systems, GCHQ planned to break into data links connecting Belgacom and its international partners, monitoring communications transmitted between Europe and the rest of the world. A map in the GCHQ documents, named “Belgacom_connections,” highlights the company’s reach across Europe, the Middle East, and North Africa, illustrating why British spies deemed it of such high value.
Documents published with this article:
Automated NOC detection
Mobile Networks in My NOC World
Making network sense of the encryption problem
Stargate CNE requirements
NAC review – October to December 2011
GCHQ NAC review – January to March 2011
GCHQ NAC review – April to June 2011
GCHQ NAC review – July to September 2011
GCHQ NAC review – January to March 2012
GCHQ Hopscotch
Belgacom connections
SOPA, ACTA, the criminalization of sharing, and a myriad of other measures taken to perpetuate antiquated business models propping up enduring monopolies - all have become increasingly taxing on the tech community and informed citizens alike. When the storm clouds gather and torrential rain begins to fall, the people have managed to stave off the flood waters through collective effort and well-organized activism - stopping, or at least delaying, SOPA and ACTA.
However, is it really sustainable to mobilize each and every time multi-billion dollar corporations combine their resources and attempt to pass another series of draconian rules and regulations? Instead of manning the sandbags during each storm, wouldn't it suit us all better to transform the surrounding landscape in such a way as to harmlessly divert the floods, or better yet, harness them to our advantage?
In many ways the transformation has already begun.
While open source software and hardware, as well as innovative business models built around collaboration and crowd-sourcing, have done much to build a paradigm independent of current centralized proprietary business models, large centralized corporations and the governments that do their bidding still guard all the doors and carry all the keys. The Internet, the phone networks, radio waves, and satellite systems remain firmly in the hands of big business. As long as they do, those businesses retain the ability not only to reassert themselves in areas where gains have been made, but to impose preemptive measures to prevent any future progress.
With the advent of hackerspaces, we increasingly see projects that hold the potential to replace, at least on a local level, much of the centralized infrastructure we take for granted until disasters or greed-driven rules and regulations upset the balance. It is by further developing our local infrastructure that we can leave behind the sandbags of perpetual activism and enjoy a permanently altered landscape that favors our peace and prosperity.
Decentralizing Telecom
As impressive as a hydroelectric dam may be, and as overwhelming as it may seem as a project to undertake, it always starts with a single shovelful of dirt. The work required becomes, in its own way, part of the payoff - with experience gained and a magnificent accomplishment to aspire toward.
In the same way, a communication network that runs parallel to existing networks, with global coverage, but locally controlled, may seem an impossible, overwhelming objective - and for one individual, or even a small group of individuals, it is. However, the paradigm has shifted. In the age of digital collaboration made possible by existing networks, the building of such a network can be done in parallel.
In an act of digital-judo, we can use the system's infrastructure as a means of supplanting and replacing it with something superior in both function and in form.
Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines.
NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.”
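As a toy model, the retention rule described above can be expressed as a simple check: delete after five years unless an exception applies. Only the "enciphered" exception is modeled here, and the function signature is invented for illustration; the statute's actual mechanics are considerably more involved.

```python
from datetime import datetime, timedelta

# Five-year retention limit for incidentally collected communications
# (ignoring leap days for simplicity).
FIVE_YEARS = timedelta(days=5 * 365)

def must_delete(collected_at, now, enciphered):
    """True if the communication has aged out and no exception applies.

    Models the Section 309 default (delete after five years) and the
    exception for communications that are "enciphered or reasonably
    believed to have a secret meaning."
    """
    if enciphered:
        # The encryption exception: retained for an indefinite period.
        return False
    return now - collected_at > FIVE_YEARS
```

The point of the model is the asymmetry it makes visible: any communication flagged as enciphered never reaches the deletion branch at all.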
This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013.
While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so.
It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception.
Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data.
The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic. Say that an NSA target uses example@gmail.com to communicate; NSA would then search the content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever an email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target.
This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
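A small Python sketch makes the point concrete. Everything here is illustrative: the selector is the hypothetical one from the text, and the toy XOR stream cipher merely stands in for real encryption such as TLS or PGP. The point is only that a byte-level pattern match that fires on plaintext has essentially no chance of firing on ciphertext.

```python
import secrets

SELECTOR = b"example@gmail.com"  # hypothetical target selector from the text

def matches_selector(packet: bytes) -> bool:
    """Naive upstream-style 'about' match: scan traffic for the selector."""
    return SELECTOR in packet

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher -- a stand-in for real encryption, not secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

email = b"To: alice\nPlease also cc example@gmail.com on this thread."
key = secrets.token_bytes(32)

print(matches_selector(email))                    # True: the match fires on plaintext
print(matches_selector(toy_encrypt(email, key)))  # False: the selector is gone from the wire
```

Once the selector has been scrambled into an indecipherable string of bits, the only options left are decrypting first or collecting the encrypted stream wholesale, which is exactly the tension the retention exception speaks to.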
This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons.
However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.”
The collection of Internet traffic based on patterns associated with types of communications would be bulk collection, more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
Tucked into a budget authorization act in December without press notice, Section 309 (the Act is linked from the article) appears to grant very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted-information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals.
Also, an important definition from the new Intelligence Authorization Act:
"(a) DEFINITIONS.-In this section:
(1) COVERED COMMUNICATION.-The term ''covered communication''
means any nonpublic telephone or electronic communication
acquired without the consent of a person who is a
party to the communication, including communications in electronic
storage."
YouTube has decided it's had enough of Adobe's perennially-p0wned Flash and will therefore now default to delivering video with the HTML5 <video> tag.
A post by the video vault's engineering and development team says the move is now possible, and sensible, because the industry has invented useful things like adaptive bitrates, encryption, new codecs and WebRTC that make the <video> tag usable in the real world.
Those additions mean HTML5 is at least as functional as Flash – or more so – and if YouTube detects you are running Chrome, IE 11, Safari 8 or a beta version of Firefox, it'll now deliver video using <video> and flush Flash.
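For readers unfamiliar with the tag, delivering video this way takes nothing more than markup along these lines (the file names and dimensions are illustrative, not YouTube's actual embed code):

```html
<!-- The browser plays the first source it can decode; the text inside
     the element is a fallback for browsers without <video> support. -->
<video controls width="640" height="360">
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>
```

The adaptive-bitrate and encryption features the post mentions sit on top of this element via JavaScript APIs, rather than in the markup itself.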
YouTube's also decided to can what it calls the “old style” of Flash embeds, steering publishers toward its iframe-based embed API instead.
YouTube seems not to care a jot that its actions are inimical to Adobe, saying it's just doing what all the cool kids – Netflix, Apple, Microsoft and its competitor Vimeo – have already done.
Which is not to say that Flash is dead: those who don't run the browsers above will still get YouTube delivered by whatever technology works best in their environment. And that will often – perhaps too often* – be Flash. ®
Bootnote * Until they get p0wned, that is: Flash is so horridly buggy that Apple has just updated its plugin-blockers to foil versions of the product prior to 16.0.0.296 and 13.0.0.264.
Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of
agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new
FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week.
The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165
federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law
mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology
to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the
subject of subsequent requests."
Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as
agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security
Archive chose to focus on the e-reading rooms of agencies in its latest audit.
Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some
real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the art
FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st-century practices: it posts no frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
"The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist."
The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI
Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous
Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction,
assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often
embarrassing) ages of the oldest pending FOIA requests.
The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the
specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new
E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of 165) have posted just the
bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the
law's intent.
The FOIAonline Members
(Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records
Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S.
Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via
FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
Disabilities Compliance.
Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it
is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998
Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with
disabilities."
E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508
compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US
government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with
just a few clicks in Adobe Acrobat, according to
this Department of Homeland Security guide
(essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR
older documents that were scanned from paper, they cannot use that excuse with digital records.
Excuses Agencies Give for Poor E-Performance
Justice Department guidance undermines the statute.
Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their
electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been
released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record
has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of
Justice.
The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as
it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time,
money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or
format that have been released in response to a FOIA request."
Privacy.
Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first
person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use
the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's
request for his own immigration file.)
Cost and Waste of Resources.
There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them.
This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US
agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document
should also be immaterial when an agency is required to post it online.
Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person-hours searching for, reviewing, and redacting documents in response to a FOIA request only to mail them to the requester, who may slip them into a desk drawer and forget about them. That is a waste of resources. Released documents should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status demonstrating this very principle: it spent no new funds and hired no contractors to build its Electronic Reading Room, instead building a self-sustaining platform that will save the agency time and money going forward.
Forcibly taking down websites deemed to be supportive of terrorism, or criminalizing speech deemed to “advocate” terrorism, is a major trend in both Europe and the West generally. Last month in Brussels, the European Union’s counter-terrorism coordinator issued a memo proclaiming that “Europe is facing an unprecedented, diverse and serious terrorist threat,” and argued that increased state control over the Internet is crucial to combating it.
The memo noted that “the EU and its Member States have developed several initiatives related to countering radicalisation and terrorism on the Internet,” yet argued that more must be done. It argued that the focus should be on “working with the main players in the Internet industry [a]s the best way to limit the circulation of terrorist material online.” It specifically hailed the tactics of the U.K. Counter-Terrorism Internet Referral Unit (CTIRU), which has succeeded in causing the removal of large amounts of material it deems “extremist”:
In addition to recommending the dissemination of “counter-narratives” by governments, the memo also urged EU member states to “examine the legal and technical possibilities to remove illegal content.”
Exploiting terrorism fears to control speech has been a common practice in the West since 9/11, but it is becoming increasingly popular even in countries that have experienced exceedingly few attacks. A new extremist bill advocated by the right-wing Harper government in Canada (also supported by Liberal Party leader Justin Trudeau even as he recognizes its dangers) would create new crimes for “advocating terrorism”; specifically: “every person who, by communicating statements, knowingly advocates or promotes the commission of terrorism offences in general” would be guilty and could be sent to prison for five years for each offense.
In justifying the new proposal, the Canadian government admits that “under the current criminal law, it is [already] a crime to counsel or actively encourage others to commit a specific terrorism offence.” This new proposal is about criminalizing ideas and opinions. In the government’s words, it “prohibits the intentional advocacy or promotion of terrorism, knowing or reckless as to whether it would result in terrorism.”
If someone argues that continuous Western violence and interference in the Muslim world for decades justifies violence being returned to the West, or even advocates that governments arm various insurgents considered by some to be “terrorists,” such speech could easily be viewed as constituting a crime.
To calm concerns, Canadian authorities point out that “the proposed new offence is similar to one recently enacted by Australia, that prohibits advocating a terrorist act or the commission of a terrorism offence, all while being reckless as to whether another person will engage in this kind of activity.” Indeed, Australia enacted a new law late last year that indisputably targets political speech and ideas, as well as criminalizing journalism considered threatening by the government.
Punishing people for their speech deemed extremist or dangerous has been a vibrant practice in both the U.K. and U.S. for some time now, as I detailed (coincidentally) just a couple days before free speech marches broke out in the West after the Charlie Hebdo attacks. Those criminalization-of-speech attacks overwhelmingly target Muslims, and have resulted in the punishment of such classic free speech activities as posting anti-war commentary on Facebook, tweeting links to “extremist” videos, translating and posting “radicalizing” videos to the Internet, writing scholarly articles in defense of Palestinian groups and expressing harsh criticism of Israel, and even including a Hezbollah channel in a cable package.
Beyond the technical issues, trying to legislate ideas out of existence is a fool’s game: those sufficiently determined will always find ways to make themselves heard. Indeed, as U.S. pop star Barbra Streisand famously learned, attempts to suppress ideas usually result in the greatest publicity possible for their advocates and/or elevate them by turning fringe ideas into martyrs for free speech (I have zero doubt that all five of the targeted sites enjoyed among their highest traffic dates ever today as a result of the French targeting).
But the comical futility of these efforts is exceeded by their profound dangers. Who wants governments to be able to unilaterally block websites? Isn’t the exercise of this website-blocking power what has long been cited as reasons we should regard the Bad Countries — such as China and Iran — as tyrannies (which also usually cite “counterterrorism” to justify their censorship efforts)?
As those and countless other examples prove, the concepts of “extremism” and “radicalizing” (like “terrorism” itself) are incredibly vague and elastic, and in the hands of those who wield power, almost always expand far beyond what you think they should mean (plotting to blow up innocent people) to mean: anyone who disseminates ideas that are threatening to the exercise of our power. That’s why powers justified in the name of combating “radicalism” or “extremism” are invariably — not often or usually, but invariably — applied to activists, dissidents, protesters and those who challenge prevailing orthodoxies and power centers.
My arguments for distrusting governments to exercise powers of censorship are set forth here (in the context of a prior attempt by a different French minister to control the content of Twitter). In sum, far more damage has been inflicted historically by efforts to censor and criminalize political ideas than by the kind of “terrorism” these governments are invoking to justify these censorship powers.
And whatever else may be true, few things are more inimical to, or threatening of, Internet freedom than allowing functionaries inside governments to unilaterally block websites from functioning on the ground that the ideas those sites advocate are objectionable or “dangerous.” That’s every bit as true when the censors are in Paris, London, Ottawa and Washington as when they are in Tehran, Moscow or Beijing.
All the remaining Snowden documents will be released next month, according to whistle-blowing site Cryptome, which said in a tweet that the release of the info by unnamed third parties would be necessary to head off an unnamed "war". Cryptome said it would "aid and abet" the release of "57K to 1.7M" new documents that had been "withheld for national security-public debate [sic]".
<a href="http://pubads.g.doubleclick.net/gampad/jump?iu=/6978/reg_security/front&amp;sz=300x250%7C300x600&amp;tile=3&amp;c=33U7RchawQrMoAAHIac14AAAKH&amp;t=ct%3Dns%26unitnum%3D3%26unitname%3Dwww_top_mpu%26pos%3Dtop%26test%3D0" target="_blank">
<img src="http://pubads.g.doubleclick.net/gampad/ad?iu=/6978/reg_security/front&amp;sz=300x250%7C300x600&amp;tile=3&amp;c=33U7RchawQrMoAAHIac14AAAKH&amp;t=ct%3Dns%26unitnum%3D3%26unitname%3Dwww_top_mpu%26pos%3Dtop%26test%3D0" alt=""></a>
The site clarified that it will not be publishing the documents itself. Transparency activists would welcome such a release, but such a move would be heavily criticised by intelligence agencies and military officials, who argue that Snowden's dump of secret documents has set US and allied (especially British) intelligence efforts back by years.
As things stand, the flow of Snowden disclosures is controlled by those who have access to the Snowden archive, which might possibly include Snowden confidants such as Glenn Greenwald and Laura Poitras. In some cases, even when these people release information to mainstream media organisations, it is then suppressed by these organisations after negotiation with the authorities. (In one such case, some key facts were later revealed by the Register.)

"July is when war begins unless headed off by Snowden full release of crippling intel. After war begins not a chance of release," Cryptome tweeted on its official feed. "Warmongerers are on a rampage. So, yes, citizens holding Snowden docs will do the right thing," it said.
"For more on Snowden docs release in July watch for Ellsberg, special guest and others at HOPE, July 18-20: http://www.hope.net/schedule.html," it added.HOPE (Hackers On Planet Earth) is a well-regarded and long-running hacking conference organised by 2600 magazine. Previous speakers at the event have included Kevin Mitnick, Steve Wozniak and Jello Biafra.In other developments, Cryptome has started a Kickstarter fund to release its entire archive in the form of a USB stick archive. It wants to raise $100,000 to help it achieve its goal. More than $14,000 has already been raised.The funding drive follows a dispute between Cryptome and its host Network Solutions, which is owned by web.com. Access to the site was blocked following a malware infection last week. Cryptome founder John Young criticised the host, claiming it had over-reacted and had been slow to restore access to the site, which Cryptome criticised as a form of censorship.In response, Cryptome plans to more widely distribute its content across multiple sites as well as releasing the planned USB stick archive. ®
If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online.
HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities.
But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
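The session-hijacking point can be sketched in a few lines of Python. The server logic and names here are invented for illustration, not any real site's code: if requests are authenticated by a bearer cookie alone, anyone who sniffs that cookie off an unencrypted connection is indistinguishable from the victim, no password required.

```python
import secrets

# In-memory session store, as a server might keep: session_id -> username.
sessions = {}

def log_in(username: str) -> str:
    """Issue a random session ID after a successful login."""
    sid = secrets.token_hex(16)
    sessions[sid] = username
    return sid  # sent back to the browser as a cookie

def whoami(session_cookie: str) -> str:
    """The server trusts whoever presents a valid session cookie."""
    return sessions.get(session_cookie, "anonymous")

victim_cookie = log_in("alice")

# Over plain HTTP this cookie crosses the network in cleartext; an
# eavesdropper who captures it can simply replay it:
stolen = victim_cookie
print(whoami(stolen))  # "alice" -- the attacker now acts as the victim
```

HTTPS closes this hole by encrypting the cookie in transit, so there is nothing usable to sniff in the first place.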
Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks.
And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.)
If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
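The hoop-jumping being automated is essentially a proof of domain control. Here is a toy Python model of that handshake, loosely inspired by the ACME protocol Let's Encrypt is built on; the function names, the dictionary standing in for a web server, and the flow are simplified assumptions, not the real wire format.

```python
import secrets

# domain -> {path: content}, standing in for each site's web server.
webroots = {}

def ca_issue_challenge() -> str:
    """The CA hands the applicant a random token to publish on the domain."""
    return secrets.token_hex(16)

def applicant_publish(domain: str, token: str) -> None:
    """The site operator's client drops the token at a well-known URL."""
    webroots.setdefault(domain, {})[f"/.well-known/acme-challenge/{token}"] = token

def ca_validate(domain: str, token: str) -> bool:
    """The CA fetches that URL; a match proves control of the domain."""
    path = f"/.well-known/acme-challenge/{token}"
    return webroots.get(domain, {}).get(path) == token

token = ca_issue_challenge()
applicant_publish("example.com", token)
print(ca_validate("example.com", token))   # True: certificate can be issued
print(ca_validate("evil.example", token))  # False: no proof of control
```

In the real protocol the CA fetches the well-known URL over the open internet before signing a certificate, which is what lets the whole exchange run with no human in the loop.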
The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.