
Paul Merrell

This project aims to make '404 not found' pages a thing of the past

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of the targeted content are known by an author, the new code would carry both directly in the link markup (see the sketch below). The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
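    A minimal, hypothetical sketch of what such markup might look like (the excerpt above omits the project's actual example, so the attribute-value format shown here, a reference date paired with the URL of an archived copy, is an assumption rather than the proposal's confirmed syntax):

        <!-- hypothetical mset syntax; the date and archive URL are illustrative only -->
        <a href="http://example.com/article"
           mset="2014-03-17 http://archive.example.org/2014/http://example.com/article">
          a hyperlink that carries its own archived fallback
        </a>

    Under a scheme along these lines, a browser or archiving service that found the primary href dead could fall back to the archived copy recorded for the stated date of reference.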
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly, email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-;-)”
Paul Merrell

Closing CDF WG, Publishing Specs as Notes from Doug Schepers on 2010-07-12 (public-cdf@...

  • Hi, CDF folks- While we had hoped that more implementations might emerge that passed the CDF and WICD test suites [1], such that these specifications would meet the criteria as W3C Recommendations, it does not seem that this will happen in a reasonable timeframe. Despite good partial implementation experience, implementers have not shown sufficient interest to justify further investment of W3C resources into this group, even at a background level. In order to clarify the status of the CDF WG specifications, including Compound Document by Reference Framework 1.0 [2], Web Integration Compound Document (WICD) Core 1.0 [3], WICD Mobile 1.0 [4], and WICD Full 1.0 [5], all in Candidate Recommendation phase since July 2007, we have decided to publish them as Working Group Notes instead, and to close the Compound Document Formats Working Group.
  •  
    This event speaks loudly to how little interest browser developers have in interoperable web solutions. One-way compatibility wins and the ability of web applications to round-trip data loses. For those that did not realize it, the Compound Document by Reference Framework not only allows but requires that more featureful implementations round-trip the output of less featureful implementations without data loss. See http://www.w3.org/TR/2007/CR-CDR-20070718/#conformance ("A conformant user agent of a superset profile specification must process subset profile content as if it were the superset profile content").
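    For readers unfamiliar with the terminology, a compound document "by reference" combines formats by linking to external resources rather than inlining them. A minimal illustrative sketch (hypothetical markup, not taken from the CDF or WICD test suites) would be an XHTML page that pulls in an SVG graphic by reference:

        <!-- hypothetical example: XHTML referencing external SVG content -->
        <object data="chart.svg" type="image/svg+xml" width="400" height="300">
          Fallback text for user agents that cannot render the referenced SVG.
        </object>

    The conformance rule quoted above means a user agent for a superset profile (e.g., WICD Full) must process content authored to a subset profile (e.g., WICD Mobile) as superset content, which is what would make lossless round-tripping possible.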
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |...

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target.
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection; more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
Paul Merrell

Canadian Spies Collect Domestic Emails in Secret Security Sweep - The Intercept

  • Canada’s electronic surveillance agency is covertly monitoring vast amounts of Canadians’ emails as part of a sweeping domestic cybersecurity operation, according to top-secret documents. The surveillance initiative, revealed Wednesday by CBC News in collaboration with The Intercept, is sifting through millions of emails sent to Canadian government agencies and departments, archiving details about them on a database for months or even years. The data mining operation is carried out by the Communications Security Establishment, or CSE, Canada’s equivalent of the National Security Agency. Its existence is disclosed in documents obtained by The Intercept from NSA whistleblower Edward Snowden. The emails are vacuumed up by the Canadian agency as part of its mandate to defend against hacking attacks and malware targeting government computers. It relies on a system codenamed PONY EXPRESS to analyze the messages in a bid to detect potential cyber threats.
  • Last year, CSE acknowledged it collected some private communications as part of cybersecurity efforts. But it refused to divulge the number of communications being stored or to explain for how long any intercepted messages would be retained. Now, the Snowden documents shine a light for the first time on the huge scope of the operation — exposing the controversial details the government withheld from the public. Under Canada’s criminal code, CSE is not allowed to eavesdrop on Canadians’ communications. But the agency can be granted special ministerial exemptions if its efforts are linked to protecting government infrastructure — a loophole that the Snowden documents show is being used to monitor the emails. The latest revelations will trigger concerns about how Canadians’ private correspondence with government employees is being archived by the spy agency and potentially shared with police or allied surveillance agencies overseas, such as the NSA. Members of the public routinely communicate with government employees when, for instance, filing tax returns, writing a letter to a member of parliament, applying for employment insurance benefits or submitting a passport application.
  • Chris Parsons, an internet security expert with the Toronto-based internet think tank Citizen Lab, told CBC News that “you should be able to communicate with your government without the fear that what you say … could come back to haunt you in unexpected ways.” Parsons said that there are legitimate cybersecurity purposes for the agency to keep tabs on communications with the government, but he added: “When we collect huge volumes, it’s not just used to track bad guys. It goes into data stores for years or months at a time and then it can be used at any point in the future.” In a top-secret CSE document on the security operation, dated from 2010, the agency says it “processes 400,000 emails per day” and admits that it is suffering from “information overload” because it is scooping up “too much data.” The document outlines how CSE built a system to handle a massive 400 terabytes of data from Internet networks each month — including Canadians’ emails — as part of the cyber operation. (A single terabyte of data can hold about a billion pages of text, or about 250,000 average-sized mp3 files.)
  • The agency notes in the document that it is storing large amounts of “passively tapped network traffic” for “days to months,” encompassing the contents of emails, attachments and other online activity. It adds that it stores some kinds of metadata — data showing who has contacted whom and when, but not the content of the message — for “months to years.” The document says that CSE has “excellent access to full take data” as part of its cyber operations and is receiving policy support on “use of intercepted private communications.” The term “full take” is surveillance-agency jargon that refers to the bulk collection of both content and metadata from Internet traffic. Another top-secret document on the surveillance dated from 2010 suggests the agency may be obtaining at least some of the data by covertly mining it directly from Canadian Internet cables. CSE notes in the document that it is “processing emails off the wire.”
Paul Merrell

WASHINGTON: CIA admits it broke into Senate computers; senators call for spy chief's ou...

  • An internal CIA investigation confirmed allegations that agency personnel improperly intruded into a protected database used by Senate Intelligence Committee staff to compile a scathing report on the agency’s detention and interrogation program, prompting bipartisan outrage and at least two calls for spy chief John Brennan to resign. “This is very, very serious, and I will tell you, as a member of the committee, someone who has great respect for the CIA, I am extremely disappointed in the actions of the agents of the CIA who carried out this breach of the committee’s computers,” said Sen. Saxby Chambliss, R-Ga., the committee’s vice chairman.
  • The rare display of bipartisan fury followed a three-hour private briefing by Inspector General David Buckley. His investigation revealed that five CIA employees, two lawyers and three information technology specialists improperly accessed or “caused access” to a database that only committee staff were permitted to use. Buckley’s inquiry also determined that a CIA crimes report to the Justice Department alleging that the panel staff removed classified documents from a top-secret facility without authorization was based on “inaccurate information,” according to a summary of the findings prepared for the Senate and House intelligence committees and released by the CIA. In other conclusions, Buckley found that CIA security officers conducted keyword searches of the emails of staffers of the committee’s Democratic majority _ and reviewed some of them _ and that the three CIA information technology specialists showed “a lack of candor” in interviews with Buckley’s office.
  • The inspector general’s summary did not say who may have ordered the intrusion or when senior CIA officials learned of it. Following the briefing, some senators struggled to maintain their composure over what they saw as a violation of the constitutional separation of powers between an executive branch agency and its congressional overseers. “We’re the only people watching these organizations, and if we can’t rely on the information that we’re given as being accurate, then it makes a mockery of the entire oversight function,” said Sen. Angus King, an independent from Maine who caucuses with the Democrats. The findings confirmed charges by the committee chairwoman, Sen. Dianne Feinstein, D-Calif., that the CIA intruded into the database that by agreement was to be used by her staffers compiling the report on the harsh interrogation methods used by the agency on suspected terrorists held in secret overseas prisons under the George W. Bush administration. The findings also contradicted Brennan’s denials of Feinstein’s allegations, prompting two panel members, Sens. Mark Udall, D-Colo., and Martin Heinrich, D-N.M., to demand that the spy chief resign.
  • Another committee member, Sen. Ron Wyden, D-Ore., and some civil rights groups called for a fuller investigation. The demands clashed with a desire by President Barack Obama, other lawmakers and the CIA to move beyond the controversy over the “enhanced interrogation program” after Feinstein releases her committee’s report, which could come as soon as next week. Many members demanded that Brennan explain his earlier denial that the CIA had accessed the Senate committee database. “Director Brennan should make a very public explanation and correction of what he said,” said Sen. Carl Levin, D-Mich. He all but accused the Justice Department of a coverup by deciding not to pursue a criminal investigation into the CIA’s intrusion.
  • “I thought there might have been information that was produced after the department reached their conclusion,” he said. “What I understand, they have all of the information which the IG has.” He hinted that the scandal goes further than the individuals cited in Buckley’s report. “I think it’s very clear that CIA people knew exactly what they were doing and either knew or should’ve known,” said Levin, adding that he thought that Buckley’s findings should be referred to the Justice Department. A person with knowledge of the issue insisted that the CIA personnel who improperly accessed the database “acted in good faith,” believing that they were empowered to do so because they believed there had been a security violation. “There was no malicious intent. They acted in good faith believing they had the legal standing to do so,” said the knowledgeable person, who asked not to be further identified because they weren’t authorized to discuss the issue publicly. “But it did not conform with the legal agreement reached with the Senate committee.”
  • Feinstein called Brennan’s apology and his decision to submit Buckley’s findings to the accountability board “positive first steps.” “This IG report corrects the record and it is my understanding that a declassified report will be made available to the public shortly,” she said in a statement. “The investigation confirmed what I said on the Senate floor in March _ CIA personnel inappropriately searched Senate Intelligence Committee computers in violation of an agreement we had reached, and I believe in violation of the constitutional separation of powers,” she said. It was not clear why Feinstein didn’t repeat her charges from March that the agency also may have broken the law and had sought to “thwart” her investigation into the CIA’s use of waterboarding, which simulates drowning, sleep deprivation and other harsh interrogation methods _ tactics denounced by many experts as torture.
  • Buckley’s findings clashed with denials by Brennan that he issued only hours after Feinstein’s blistering Senate speech. “As far as the allegations of, you know, CIA hacking into, you know, Senate computers, nothing could be further from the truth. I mean, we wouldn’t do that. I mean, that’s _ that’s just beyond the _ you know, the scope of reason in terms of what we would do,” he said in an appearance at the Council on Foreign Relations. White House Press Secretary Josh Earnest issued a strong defense of Brennan, crediting him with playing an “instrumental role” in the administration’s fight against terrorism, in launching Buckley’s investigation and in looking for ways to prevent such occurrences in the future. Earnest was asked at a news briefing whether there was a credibility issue for Brennan, given his forceful denial in March. “Not at all,” he replied, adding that Brennan had suggested the inspector general’s investigation in the first place. And, he added, Brennan had taken the further step of appointing the accountability board to review the situation and the conduct of those accused of acting improperly to “ensure that they are properly held accountable for that conduct.”
  • The allegations and the separate CIA charge that the committee staff removed classified documents from the secret CIA facility in Northern Virginia without authorization were referred to the Justice Department for investigation. The department earlier this month announced that it had found insufficient evidence on which to proceed with criminal probes into either matter “at this time.” Thursday, Justice Department officials declined comment.
  • In her speech, Feinstein asserted that her staff found the material _ known as the Panetta review, after former CIA Director Leon Panetta, who ordered it _ in the protected database and that the CIA discovered the staff had it by monitoring its computers in violation of the user agreement. The inspector general’s summary, which was prepared for the Senate and the House intelligence committees, didn’t identify the CIA personnel who had accessed the Senate’s protected database. Furthermore, it said, the CIA crimes report to the Justice Department alleging that panel staffers had removed classified materials without permission was grounded on inaccurate information. The report is believed to have been sent by the CIA’s then acting general counsel, Robert Eatinger, who was a legal adviser to the interrogation program. “The factual basis for the referral was not supported, as the author of the referral had been provided inaccurate information on which the letter was based,” said the summary, noting that the Justice Department decided not to pursue the issue.
  • Christopher Anders, senior legislative counsel with the American Civil Liberties Union, criticized the CIA announcement, saying that “an apology isn’t enough.” “The Justice Department must refer the (CIA) inspector general’s report to a federal prosecutor for a full investigation into any crimes by CIA personnel or contractors,” said Anders.
  •  
    And no one but the lowest ranking staffer knew anything about it, not even the CIA lawyer who made the criminal referral to the Justice Dept., alleging that the Senate Intelligence Committee had accessed classified documents it wasn't authorized to access. So the Justice Dept. announces that there's insufficient evidence to warrant a criminal investigation. As though the CIA lawyer's allegations were not based on the unlawful surveillance of the Senate Intelligence Committee's network.  Can't we just get an official announcement that Attorney General Holder has decided that there shall be a cover-up? 
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |...

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

European Parliament Urges Protection for Edward Snowden - The New York Times

  • The European Parliament narrowly adopted a nonbinding but nonetheless forceful resolution on Thursday urging the 28 nations of the European Union to recognize Edward J. Snowden as a “whistle-blower and international human rights defender” and shield him from prosecution. On Twitter, Mr. Snowden, the former National Security Agency contractor who leaked millions of documents about electronic surveillance by the United States government, called the vote a “game-changer.” But the resolution has no legal force and limited practical effect for Mr. Snowden, who is living in Russia on a three-year residency permit. Whether to grant Mr. Snowden asylum remains a decision for the individual European governments, and none have done so thus far. Still, the resolution was the strongest statement of support seen for Mr. Snowden from the European Parliament. At the same time, the close vote — 285 to 281 — suggested the extent to which some European lawmakers are wary of alienating the United States.
  • The resolution calls on European Union members to “drop any criminal charges against Edward Snowden, grant him protection and consequently prevent extradition or rendition by third parties.” In June 2013, shortly after Mr. Snowden’s leaks became public, the United States charged him with theft of government property and violations of the Espionage Act of 1917. By then, he had flown to Moscow, where he spent weeks in legal limbo before he was granted temporary asylum and, later, a residency permit. Four Latin American nations have offered him permanent asylum, but he does not believe he could travel from Russia to those countries without running the risk of arrest and extradition to the United States along the way.
  • The White House, which has used diplomatic efforts to discourage even symbolic resolutions of support for Mr. Snowden, immediately criticized the resolution. “Our position has not changed,” said Ned Price, a spokesman for the National Security Council in Washington. “Mr. Snowden is accused of leaking classified information and faces felony charges here in the United States. As such, he should be returned to the U.S. as soon as possible, where he will be accorded full due process.” Jan Philipp Albrecht, one of the lawmakers who sponsored the resolution in Europe, said it should increase pressure on national governments.
  • “It’s the first time a Parliament votes to ask for this to be done — and it’s the European Parliament,” Mr. Albrecht, a German lawmaker with the Greens political bloc, said in a phone interview shortly after the vote, which was held in Strasbourg, France. “So this has an impact surely on the debate in the member states.” The resolution “is asking or demanding the member states’ governments to end all the charges and to prevent any extradition to a third party,” Mr. Albrecht said. “That’s a very clear call, and that can’t be just ignored by the governments,” he said.
Gary Edwards

Leahy scuttles his warrantless e-mail surveillance bill | Politics and Law - CNET News

  •  
    Many thanks to FreedomWorks and the Center for Democracy and Technology for the fine work they did in opposing this tyranny of our government trying to take over the Internet. excerpt: "Sen. Patrick Leahy has abandoned his controversial proposal that would grant government agencies more surveillance power -- including warrantless access to Americans' e-mail accounts -- than they possess under current law. The Vermont Democrat said today on Twitter that he would "not support such an exception" for warrantless access. The remarks came a few hours after a CNET article was published this morning that disclosed the existence of the measure. A vote on the proposal in the Senate Judiciary committee, which Leahy chairs, is scheduled for next Thursday. The amendments were due to be glued onto a substitute (PDF) to H.R. 2471, which the House of Representatives already has approved. Leahy's about-face comes in response to a deluge of criticism today, including the American Civil Liberties Union saying that warrants should be required, and the conservative group FreedomWorks launching a petition to Congress -- with more than 2,300 messages sent so far -- titled: "Tell Congress: Stay Out of My Email!" A spokesman for the senator did not respond to questions today from CNET asking for clarification of what Leahy would support next week. (We'll update this article if we receive a response.) A Democratic aide to the Judiciary committee did, however, tell CNET this afternoon that Leahy does not support broad exceptions for warrantless searches of e-mail content. A note from Leahy's Twitter account added: "Technology has created vacuum in privacy protection. Sen. Leahy believes that needs to be fixed, and #ECPA needs privacy updates." That's a reference to the 1986 Electronic Communications Privacy Act, which currently does not require that police always obtain a warrant for the contents of e-mail and other communications. This revised position will come as a relief to privacy
Gary Edwards

Furious Over End Of Google Reader - Business Insider

  •  
    "Gary Edwards on Mar 15, 8:25 PM said: There are only three apps i load at boot-up: gMail, gReader, and gWave. Ooops! Google Wave was cancelled over a year ago. Owning the end-user's attention at boot-up proved to be an essential factor to the Microsoft monopoly. They built an iron fisted empire out of owning the point of boot-up. So it's very strange to see Google give up the very thing other cloud platform contenders would no doubt kill for. Very strange. Even stranger though is the perception that Google + will somehow now move to center stage? The only reason i use Google+ is because it's easy to point to an article and post a comment from Google Reader to my + circles. Other than that i have no use for +. Nicolas Carr posted an interesting comment on Google's cancellation of gReader yesterday. He tried to argue that there is a difference between "tools" and "platforms", and Google was more interested in building a platform than maintaining "tools" like gReader. So, Google+ is now essential to the Google Platform? Unfortunately, the otherwise brilliant and cosmically insightful Mr. Carr fails to make that case. Microsoft became a platform when they succeeded in positioning their OS as the essential factor bridging an explosively innovative and rapidly commoditizing Windows hardware reference platform, and the equally rapid and innovative Windows software application platform. Both software and hardware were being written and developed to the Windows OS, with features doubling and costs being halved at a rate that even Moore's Law envied. Microsoft fully cemented the emerging hardware - OS - application platform with a business productivity environment that necessitated the use of the MS Office suite of servers and apps. That lock on business productivity has yet to be broken. And even though the mighty Google Apps has made some progress convincing businesses to rip-out-and-replace their legacy business productivity systems and rewrite to the Google Cloud P
Gary Edwards

The top 20 HTML5 sites of 2012 | Feature | .net magazine - 0 views

  •  
    Excellent review of great HTML5 websites.  Includes quick reviews of tools and developer services for HTML5, CSS3, Canvas/SVG, and JavaScript.  (No JSON :()  Includes sites offering tutorials and demonstrations of advanced, even spectacular, HTML5 builds.  This is clearly the kind of resource anyone involved with advancing HTML5 would like to return to and reference as the Web pushes forward.  Good stuff, Oli! "2012 in review: HTML5 Doctor Oli Studholme nominates the websites that made best use of HTML5 this year, including a range of useful developer tools and online resources Another year has flown by, bringing the requisite slew of major changes. HTML5 is on track to be a recommendation in 2014, with W3C appointing four new editors to manage the W3C's HTML5 spec and putting the HTML5 spec on GitHub; and WHATWG focusing on the HTML Living Standard. Responsive design and Twitter Bootstrap went mainstream, IE10 was released (along with seven versions of Chrome and Firefox), and browser support continues to improve. It's impossible to pick only 20 ground-breaking sites from the thousands that did truly advance our collective game, but here's my attempt. For convenience, I've grouped them according to the way in which they use HTML5."
Gary Edwards

Readability / Clearly - Article Publishing Guidelines - Readability - 1 views

  •  
    Want to know how Evernote Clearly and Amazon Kindle Web Reader work?  This is it.  Both are made by Readability and in this guideline for Web publishers, they explain how they rip and parse a Web page.  The secret is solid HTML5!!!  "The following is a proposed standard for bringing more semanticity to articles on the Web. In our efforts to provide quality content without the superfluous leavings, we've seen that the Web is a pretty messy place. We hope that by providing some simple guidelines we can help publishers make their content a little more presentable with Readability while also making the Web a bit more semantic. By and large, you'll find that our guidelines just follow other specifications. We lean heavily on the work of the hNews microformat as well as the new elements provided within HTML5. If anything is unclear, please refer to the hNews microformat specification as well as this handy guide to semantic elements in html5, from Mark Pilgrim's Dive into HTML5. "
Paul Merrell

Long-Secret Stingray Manuals Detail How Police Can Spy on Phones - 0 views

  • Harris Corp.’s Stingray surveillance device has been one of the most closely guarded secrets in law enforcement for more than 15 years. The company and its police clients across the United States have fought to keep information about the mobile phone-monitoring boxes from the public against which they are used. The Intercept has obtained several Harris instruction manuals spanning roughly 200 pages and meticulously detailing how to create a cellular surveillance dragnet. Harris has fought to keep its surveillance equipment, which carries price tags in the low six figures, hidden from both privacy activists and the general public, arguing that information about the gear could help criminals. Accordingly, an older Stingray manual released under the Freedom of Information Act to news website TheBlot.com last year was almost completely redacted. So too have law enforcement agencies at every level, across the country, evaded almost all attempts to learn how and why these extremely powerful tools are being used — though court battles have made it clear Stingrays are often deployed without any warrant. The San Bernardino Sheriff’s Department alone has snooped via Stingray, sans warrant, over 300 times.
  • The documents described and linked below, instruction manuals for the software used by Stingray operators, were provided to The Intercept as part of a larger cache believed to have originated with the Florida Department of Law Enforcement. Two of them contain a “distribution warning” saying they contain “Proprietary Information and the release of this document and the information contained herein is prohibited to the fullest extent allowable by law.”  Although “Stingray” has become a catch-all name for devices of its kind, often referred to as “IMSI catchers,” the manuals include instructions for a range of other Harris surveillance boxes, including the Hailstorm, ArrowHead, AmberJack, and KingFish. They make clear the capability of those devices and the Stingray II to spy on cellphones by, at minimum, tracking their connection to the simulated tower, information about their location, and certain “over the air” electronic messages sent to and from them. Wessler added that parts of the manuals make specific reference to permanently storing this data, something that American law enforcement has denied doing in the past.
  • One piece of Windows software used to control Harris’s spy boxes, software that appears to be sold under the name “Gemini,” allows police to track phones across 2G, 3G, and LTE networks. Another Harris app, “iDen Controller,” provides a litany of fine-grained options for tracking phones. A law enforcement agent using these pieces of software along with Harris hardware could not only track a large number of phones as they moved throughout a city but could also apply nicknames to certain phones to keep track of them in the future. The manual describing how to operate iDEN, the lengthiest document of the four at 156 pages, uses an example of a target (called a “subscriber”) tagged alternately as Green Boy and Green Ben:
  • ...2 more annotations...
  • In order to maintain an uninterrupted connection to a target’s phone, the Harris software also offers the option of intentionally degrading (or “redirecting”) someone’s phone onto an inferior network, for example, knocking a connection from LTE to 2G:
  • A video of the Gemini software installed on a personal computer, obtained by The Intercept and embedded below, provides not only an extensive demonstration of the app but also underlines how accessible the mass surveillance code can be: Installing a complete warrantless surveillance suite is no more complicated than installing Skype. Indeed, software such as Photoshop or Microsoft Office, which require a registration key or some other proof of ownership, are more strictly controlled by their makers than software designed for cellular interception.
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 0 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies. (A short illustrative code sketch of this request/response pattern follows these notes.)
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers. Photo: Alex Emmons
  • ...2 more annotations...
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
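The "Realtime Crowd Insights" annotations above describe a simple request/response pattern: a client posts a captured image to a cloud endpoint, and the service returns per-face attributes such as a serial number, gender, estimated age, and emotion scores. The Python sketch below only illustrates that pattern; the endpoint URL, the header name, and the response fields are hypothetical placeholders, not Microsoft's documented interface.

    # Minimal sketch of the send-an-image, get-emotion-profiles-back pattern
    # described above. EMOTION_API_URL, the "Subscription-Key" header, and the
    # response fields are hypothetical placeholders, not Microsoft's real API.
    import json
    import urllib.request

    EMOTION_API_URL = "https://example.invalid/emotion/recognize"  # placeholder endpoint
    API_KEY = "YOUR-SUBSCRIPTION-KEY"  # placeholder credential

    def analyze_faces(image_path):
        """POST a raw image and return the decoded list of per-face records."""
        with open(image_path, "rb") as f:
            body = f.read()
        request = urllib.request.Request(
            EMOTION_API_URL,
            data=body,
            headers={
                "Content-Type": "application/octet-stream",
                "Subscription-Key": API_KEY,
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    # Assumed response shape: one record per detected face, for example
    # [{"faceId": "b2ff", "gender": "male", "age": 23,
    #   "scores": {"happiness": 0.82, "neutral": 0.11}}]

A campaign analyst could aggregate the returned scores over successive frames to chart crowd reaction to each talking point, which is the use the sales representatives described.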
Gary Edwards

9 Open Source Big Data Technologies Set to Change the Web - 0 views

  •  
    Good rundown of new big data technologies for Cloud Computing. The technologies are arranged in context, beginning with Apache Hadoop and including HBase and MongoDB. Nice reference article.
Paul Merrell

Hyperlinking is Not Copyright Infringement, EU Court Rules | TorrentFreak - 0 views

  • Does publishing a hyperlink to freely available content amount to an illegal communication to the public and therefore a breach of creator's copyrights under European law? After examining a case referred to it by Sweden's Court of Appeal, the Court of Justice of the European Union has ruled today that no, it does not.
  • One such case, referred to the CJEU by Sweden’s Court of Appeal, is of particular interest to Internet users as it concerns the very mechanism that holds the web together. The dispute centers on a company called Retriever Sverige AB, an Internet-based subscription service that indexes links to articles that can be found elsewhere online for free. The problem came when Retriever published links to articles published on a newspaper’s website that were written by Swedish journalists. The company felt that it did not have to compensate the journalists for simply linking to their articles, nor did it believe that embedding them within its site amounted to copyright infringement. The journalists, on the other hand, felt that by linking to their articles Retriever had “communicated” their works to the public without permission. In the belief they should be paid, the journalists took their case to the Stockholm District Court. They lost their case in 2010 and decided to take the case to appeal. From there the Svea Court of Appeal sought advice from the EU Court. Today the Court of Justice published its lengthy decision and it’s largely good news for the Internet.
Gary Edwards

WhiteHat Aviator - The most secure browser online - 1 views

  •  
    "FREQUENTLY ASKED QUESTIONS What is WhiteHat Aviator? WhiteHat Aviator; is the most secure , most private Web browser available anywhere. By default, it provides an easy way to bank, shop, and use social networks while stopping viruses from infecting computers, preventing accounts from being hacked, and blocking advertisers from invisibly spying on every click. Why do I need a secure Web browser? According to CA Technologies, 84 percent of hacker attacks in 2009 took advantage of vulnerabilities in Web browsers. Similarly, Symantec found that four of the top five vulnerabilities being exploited were client-side vulnerabilities that were frequently targeted by Web-based attacks. The fact is, that when you visit any website you run the risk of having your surfing history, passwords, real name, workplace, home address, phone number, email, gender, political affiliation, sexual preferences, income bracket, education level, and medical history stolen - and your computer infected with viruses. Sadly, this happens on millions of websites every day. Before you have any chance at protecting yourself, other browsers force you to follow complicated how-to guides, modify settings that only serve advertising empires and install obscure third-party software. What makes WhiteHat Aviator so secure? WhiteHat Aviator; is built on Chromium, the same open-source foundation used by Google Chrome. Chromium has several unique, powerful security features. One is a "sandbox" that prevents websites from stealing files off your computer or infecting it with viruses. As good as Chromium is, we went much further to create the safest online experience possible. WhiteHat Aviator comes ready-to-go with hardened security and privacy settings, giving hackers less to work with. And our browser downloads to you - without any hidden user-tracking functionality. Our default search engine is DuckDuckGo - not Google, which logs your activity. For good measure, Aviator integrates Disconnect
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges. (A short Python sketch of this transformation step appears after these annotations.)
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy. (A minimal packaging sketch in Python also follows these annotations.)
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML, InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been, "Working with Web as the Center of Everything".
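As a concrete illustration of the XHTML-to-ICML transformation step described in the annotations above, the sketch below applies an XSLT stylesheet to a clean XHTML chapter using Python's lxml library instead of the xsltproc command-line tool. The file names are placeholders, and the stylesheet itself (the roughly 500-line mapping from XHTML elements to InCopy/ICML structures) is assumed to exist; this is a sketch of the plumbing, not the authors' actual script.

    # Sketch of the XHTML -> ICML step. Assumes lxml is installed and that
    # "xhtml-to-icml.xsl" is a placeholder name for the stylesheet that maps
    # XHTML elements to InCopy/ICML structures (the article's ~500-line script).
    from lxml import etree

    def xhtml_to_icml(xhtml_path, xsl_path, out_path):
        """Apply an XSLT stylesheet to one XHTML chapter and write the result."""
        source = etree.parse(xhtml_path)               # the clean XHTML chapter
        transform = etree.XSLT(etree.parse(xsl_path))  # compile the stylesheet
        result = transform(source)                     # run the transformation
        with open(out_path, "wb") as f:
            f.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))

    # The resulting .icml file can then be "placed" into an InDesign template,
    # just as the article describes for the xsltproc-produced file.
    xhtml_to_icml("chapter01.xhtml", "xhtml-to-icml.xsl", "chapter01.icml")

The same function could be wired into a Web content management system's export hook, which is the automation the authors suggest for future projects.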
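In the same spirit, the ePub "wrapper" described above can be sketched directly rather than generated with eCub. The snippet below builds a minimal single-chapter archive; the chapter file name and identifier are placeholders, and a production package would also carry the table-of-contents file the article mentions for reader navigation.

    # Minimal sketch of wrapping one XHTML chapter as an ePub archive.
    # The chapter file name, title, and identifier are placeholders.
    import zipfile

    CONTAINER_XML = """<?xml version="1.0"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>"""

    CONTENT_OPF = """<?xml version="1.0"?>
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Book Publishing 1</dc:title>
        <dc:language>en</dc:language>
        <dc:identifier id="bookid">placeholder-book-id</dc:identifier>
      </metadata>
      <manifest>
        <item id="ch1" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
      </manifest>
      <spine>
        <itemref idref="ch1"/>
      </spine>
    </package>"""

    def make_epub(xhtml_path, epub_path):
        """Wrap one XHTML chapter in the standard ePub zip structure."""
        with zipfile.ZipFile(epub_path, "w", zipfile.ZIP_DEFLATED) as z:
            # The mimetype entry must come first and must be stored uncompressed.
            z.writestr("mimetype", "application/epub+zip",
                       compress_type=zipfile.ZIP_STORED)
            z.writestr("META-INF/container.xml", CONTAINER_XML)
            z.writestr("content.opf", CONTENT_OPF)
            z.write(xhtml_path, "chapter01.xhtml")

    make_epub("chapter01.xhtml", "book.epub")

Run over the twelve chapter files described above, plus a generated table of contents, this would roughly reproduce what eCub automates.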
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 0 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals' overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret APIs even as the OS continued to evolve. Maybe. But a full disclosure of the APIs coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly into iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • ...1 more annotation...
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet, and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but was wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process value-added transition to a Cloud Productivity Platform, and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, Salesforce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers' luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers layer on the other using open and totally transparent APIs. Of course, the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS APIs would be developed privately and separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti-IBM, evil-empire stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows APIs. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 0 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant
  • ...17 more annotations...
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputing a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper, dated January 2010, lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of the Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
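  To make the content/metadata split described above concrete, the following is a minimal illustrative sketch in Python. It is not drawn from any GCHQ system or document; the function name, the dictionary keys, and the use of a full "https://" address are assumptions for illustration only, showing how a visited URL could be divided into the site portion (described as metadata) and the full address (treated as content).

      # Illustrative sketch only: split a visited URL into the site portion,
      # which the documents describe as metadata, and the full address,
      # which is treated as content. All names here are hypothetical.
      from urllib.parse import urlsplit

      def classify_url(visited_url: str) -> dict:
          parts = urlsplit(visited_url)
          return {
              # Scheme and host only, e.g. "https://www.gchq.gov.uk" -> metadata
              "metadata_site": f"{parts.scheme}://{parts.netloc}",
              # Full address including the path, e.g. ".../what_we_do" -> content
              "content_full_address": visited_url,
          }

      print(classify_url("https://www.gchq.gov.uk/what_we_do"))
      # {'metadata_site': 'https://www.gchq.gov.uk',
      #  'content_full_address': 'https://www.gchq.gov.uk/what_we_do'}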
  • GCHQ notes in one document that, compared to surveillance rules in place in the U.S., the U.K. has “a light oversight regime.” The comparatively lax British spying regulations are reflected in secret internal rules showing that greater restrictions apply to how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explains in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, the biggest potential hindrance to the agency’s activities has been the increased use of encryption technology to protect the privacy of communications. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”