
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read.

    Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too.

    DARPA, Northrop Grumman and Goldman Sachs: In case you're wondering where Ladizinsky came from, he's a former employee of Northrop Grumman Space Technology (yes, a weapons manufacturer), where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. ... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People. ...

    Google acquires "Skynet" quantum computers from D-Wave: According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years, and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the linked Scientific American article, I note that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of sensationalist information not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
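    A note on the Nature excerpt's point that machine-learning questions can be "usefully framed as optimisation problems": a D-Wave-style annealer minimises a quadratic objective over binary variables, a problem class known as QUBO. A minimal classical sketch of what that means (the function name and the tiny toy instance are mine, purely illustrative -- real instances are far too large to brute-force, which is the annealer's whole pitch):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force the QUBO objective sum_ij Q[i][j]*x[i]*x[j] over all
    binary vectors x -- the problem class a quantum annealer targets."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: reward picking x0 or x1 individually, penalise picking both.
Q = [[-1, 2],
     [0, -1]]
print(solve_qubo(Q))  # → ((0, 1), -1)
```

    The brute force above explores all 2^n assignments; an annealer claims to search that exponentially large space in hardware, which is why "framing a question as an optimisation problem" matters for these machines.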
Paul Merrell

Internet users raise funds to buy lawmakers' browsing histories in protest | TheHill - 0 views

  • “Great news! The House just voted to pass SJR34. We will finally be able to buy the browser history of all the Congresspeople who voted to sell our data and privacy without our consent!” he wrote on the fundraising page. Another activist from Tennessee has raised more than $152,000 from more than 9,800 people. A bill on its way to President Trump’s desk would allow internet service providers (ISPs) to sell users’ data and Web browsing history. It has not taken effect, which means there is no growing history data yet to purchase. A Washington Post reporter also wrote it would be possible to buy the data “in theory, but probably not in reality.” A former enforcement bureau chief at the Federal Communications Commission told the newspaper that most internet service providers would cover up this information, under their privacy policies. If they did sell any individual's personal data in violation of those policies, a state attorney general could take the ISPs to court.
Paul Merrell

Exclusive: Inside America's Plan to Kill Online Privacy Rights Everywhere | The Cable - 0 views

  • The United States and its key intelligence allies are quietly working behind the scenes to kneecap a mounting movement in the United Nations to promote a universal human right to online privacy, according to diplomatic sources and an internal American government document obtained by The Cable. The diplomatic battle is playing out in an obscure U.N. General Assembly committee that is considering a proposal by Brazil and Germany to place constraints on unchecked internet surveillance by the National Security Agency and other foreign intelligence services. American representatives have made it clear that they won't tolerate such checks on their global surveillance network. The stakes are high, particularly in Washington -- which is seeking to contain an international backlash against NSA spying -- and in Brasilia, where Brazilian President Dilma Rousseff is personally involved in monitoring the U.N. negotiations.
  • The Brazilian and German initiative seeks to apply the right to privacy, which is enshrined in the International Covenant on Civil and Political Rights (ICCPR), to online communications. Their proposal, first revealed by The Cable, affirms a "right to privacy that is not to be subjected to arbitrary or unlawful interference with their privacy, family, home, or correspondence." It notes that while public safety may "justify the gathering and protection of certain sensitive information," nations "must ensure full compliance" with international human rights laws. A final version of the text is scheduled to be presented to U.N. members on Wednesday evening and the resolution is expected to be adopted next week. A draft of the resolution, which was obtained by The Cable, calls on states "to respect and protect the right to privacy," asserting that the "same rights that people have offline must also be protected online, including the right to privacy." It also requests the U.N. high commissioner for human rights, Navi Pillay, present the U.N. General Assembly next year with a report on the protection and promotion of the right to privacy, a provision that will ensure the issue remains on the front burner.
  • Publicly, U.S. representatives say they're open to an affirmation of privacy rights. "The United States takes very seriously our international legal obligations, including those under the International Covenant on Civil and Political Rights," Kurtis Cooper, a spokesman for the U.S. mission to the United Nations, said in an email. "We have been actively and constructively negotiating to ensure that the resolution promotes human rights and is consistent with those obligations." But privately, American diplomats are pushing hard to kill a provision of the Brazilian and German draft which states that "extraterritorial surveillance" and mass interception of communications, personal information, and metadata may constitute a violation of human rights. The United States and its allies, according to diplomats, outside observers, and documents, contend that the Covenant on Civil and Political Rights does not apply to foreign espionage.
  • In recent days, the United States circulated to its allies a confidential paper highlighting American objectives in the negotiations, "Right to Privacy in the Digital Age -- U.S. Redlines." It calls for changing the Brazilian and German text so "that references to privacy rights are referring explicitly to States' obligations under ICCPR and remove suggestion that such obligations apply extraterritorially." In other words: America wants to make sure it preserves the right to spy overseas. The U.S. paper also calls on governments to promote amendments that would weaken Brazil's and Germany's contention that some "highly intrusive" acts of online espionage may constitute a violation of freedom of expression. Instead, the United States wants to limit the focus to illegal surveillance -- which the American government claims it never, ever does. Collecting information on tens of millions of people around the world is perfectly acceptable, the Obama administration has repeatedly said. It's authorized by U.S. statute, overseen by Congress, and approved by American courts.
  • "Recall that the USG's [U.S. government's] collection activities that have been disclosed are lawful collections done in a manner protective of privacy rights," the paper states. "So a paragraph expressing concern about illegal surveillance is one with which we would agree." The privacy resolution, like most General Assembly decisions, is neither legally binding nor enforceable by any international court. But international lawyers say it is important because it creates the basis for an international consensus -- referred to as "soft law" -- that over time will make it harder and harder for the United States to argue that its mass collection of foreigners' data is lawful and in conformity with human rights norms. "They want to be able to say ‘we haven't broken the law, we're not breaking the law, and we won't break the law,'" said Dinah PoKempner, the general counsel for Human Rights Watch, who has been tracking the negotiations. The United States, she added, wants to be able to maintain that "we have the freedom to scoop up anything we want through the massive surveillance of foreigners because we have no legal obligations."
  • The United States negotiators have been pressing their case behind the scenes, raising concerns that the assertion of extraterritorial human rights could constrain America's effort to go after international terrorists. But Washington has remained relatively muted about its concerns in the U.N. negotiating sessions. According to one diplomat, "the United States has been very much in the backseat," leaving it to its allies, Australia, Britain, and Canada, to take the lead. There is no extraterritorial obligation on states "to comply with human rights," explained one diplomat who supports the U.S. position. "The obligation is on states to uphold the human rights of citizens within their territory and areas of their jurisdictions."
  • The position, according to Jamil Dakwar, the director of the American Civil Liberties Union's Human Rights Program, has little international backing. The International Court of Justice, the U.N. Human Rights Committee, and the European Court have all asserted that states do have an obligation to comply with human rights laws beyond their own borders, he noted. "Governments do have obligation beyond their territories," said Dakwar, particularly in situations, like the Guantanamo Bay detention center, where the United States exercises "effective control" over the lives of the detainees. Both PoKempner and Dakwar suggested that courts may also judge that the U.S. dominance of the Internet places special legal obligations on it to ensure the protection of users' human rights.
  • "It's clear that when the United States is conducting surveillance, these decisions and operations start in the United States, the servers are at NSA headquarters, and the capabilities are mainly in the United States," he said. "To argue that they have no human rights obligations overseas is dangerous because it sends a message that there is void in terms of human rights protection outside countries territory. It's going back to the idea that you can create a legal black hole where there is no applicable law." There were signs emerging on Wednesday that America may have been gaining ground in pressing the Brazilians and Germans to back down on one of its toughest provisions. In an effort to address the concerns of the U.S. and its allies, Brazil and Germany agreed to soften the language suggesting that mass surveillance may constitute a violation of human rights. Instead, it simply expresses deep "concern at the negative impact" that extraterritorial surveillance "may have on the exercise of and enjoyment of human rights." The U.S., however, has not yet indicated it would support the revised proposal.
  • The concession "is regrettable. But it’s not the end of the battle by any means," said Human Rights Watch’s PoKempner. She added that there will be another opportunity to corral America's spies: a U.N. discussion on possible human rights violations as a result of extraterritorial surveillance will soon be taken up by the U.N. High Commissioner.
  •  
    Woo-hoo! Go get'em, U.N.
Paul Merrell

Safe Plurality: Can it be done using OOXML's Markup Compatibility and Extensions mechan... - 0 views

  • During the OOXML standardization proceedings, the ISO participants felt that one particular sub-technology, Markup Compatibility and Extensibility (MCE), was of such potential usefulness to other standards that it was brought out into its own part. It is now IS29500:2009 Part 3: you can download it in its ECMA form here; it only has about 15 pages of substantive text. The particular issue that MCE addresses is this: what is an application supposed to do when it finds some markup it wasn't programmed to accept? This could be extension elements in some foreign namespace, but it could also be some elements from a known namespace: the case when a document was made against a newer version of the standard than the application.
  •  
    Rick Jelliffe posts a frank view of the OOXML compatibility framework, a document I've studied myself in the past. There is much that is laudable about the framework, but there are also aspects that are troublesome. Jelliffe identifies one red-flag item, the freedom for a vendor to "proprietize" OOXML using the MustUnderstand attribute, and offers some suggestions for lessening that danger through redrafting of the spec. One issue he does not touch, however, is the Microsoft Open Specification Promise covenant not to sue, a deeply flawed document in terms of anyone implementing OOXML other than Microsoft. Still, there is so much prior art for the OOXML compatibility framework that I doubt any patent reading on it would survive judicial review. E.g., a highly similar framework has been implemented in WordPerfect since version 6.0, and the OOXML framework is remarkably similar to the compatibility framework specified by OASIS OpenDocument 1.0 but subsequently gutted at ISO. The Jelliffe article offers a good overview of factors that must be considered in designing a standard's compatibility framework. For those who go on to read the compatibility framework's specification, keep in mind that in several places the document falsely claims that it is an interoperability framework. It is not. It is a framework designed for one-way transfer of data, not interoperability, which involves round-trip, two-way exchange of data without data loss.
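    The core MCE question -- what a consumer does with markup it doesn't understand -- comes down to a three-way decision: keep elements in understood namespaces, silently drop elements in namespaces the producer declared ignorable, and fail loudly on anything else. A rough illustration of that decision logic (a hand-rolled sketch, not a conforming MCE processor; the example namespace URI is invented, and a real processor would resolve the mc:Ignorable prefix list against xmlns declarations rather than take URI sets as arguments):

```python
import xml.etree.ElementTree as ET

def kept_elements(xml_text, understood, ignorable):
    """Sketch of the MCE consumer decision: keep understood markup,
    skip declared-ignorable markup, reject everything else."""
    root = ET.fromstring(xml_text)
    kept = []
    for child in root:
        # ElementTree spells namespaced tags as "{uri}localname".
        ns = child.tag[1:].split("}")[0] if child.tag.startswith("{") else ""
        if ns in understood:
            kept.append(child.tag)
        elif ns in ignorable:
            continue  # the MCE "ignorable" fallback: pretend it isn't there
        else:
            raise ValueError("unrecognised, non-ignorable namespace: %r" % ns)
    return kept

doc = """<doc xmlns:v2="urn:example:v2">
  <para/>
  <v2:fancy/>
</doc>"""
print(kept_elements(doc, understood={""}, ignorable={"urn:example:v2"}))
# → ['para']
```

    The "reject" branch is the conservative default; MustUnderstand, the red-flag item Jelliffe discusses, is the mechanism that forces a consumer into that branch even for markup it might otherwise have ignored.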
Gary Edwards

The real reason Google is making Chrome | Computerworld Blogs - 0 views

  •  
    Good analysis by Stephen Vaughan-Nichols. He gets it right. Sort of. Stephen believes that Chrome is designed to kill MSOffice. Maybe, but I think it's way too late for that. IMHO, Chrome is designed to keep Google and the Open Web in the game. A game that Microsoft is likely to run away with. Microsoft has built an easy-to-use transition bridge from the MSOffice desktop-centric "client/server" computing model to a Web-centric but proprietary RiA-WebStack-Cloud model. In short, there is an ongoing great transition of traditional client/server apps to an emerging model we might call the client/ WebStack-Cloud-RiA /server computing model. As the world shifts from a Web document model to one driven by Web Applications, there is, I believe, a complementary shift towards the advantage Microsoft holds via the desktop "client/server" monopoly. For Microsoft, this is just a transition. Painful from a monopolist profitability viewpoint - but unavoidably necessary. The transition is no doubt helped by the OOXML <> XAML "Fixed/flow" Silverlight-ready conversion component. MS also has a WebStack-Cloud (Mesh) story that has become an unstoppable juggernaut (Exchange/SharePoint/SQL Server as the WebStack). WebKit-based RiA challengers like Adobe Apollo, Google Chrome, and Apple SproutCore-Cocoa have to figure out how to crack into the great transition. MS has succeeded in protecting their MSOffice monopoly until such time as they had all the transition pieces in place. They have a decided advantage here. It's also painfully obvious that while the WebKit guys have incredible innovation on their side, they are still years behind the complete desktop to WebStack-RiA-Cloud to device to legacy servers application story Microsoft is now selling into the marketplace. They also are seriously lacking in developer tools. Still, the future of the Open Web hangs in the balance. Rather than trying to kill MSOffice, I would think a better approach would be that of trying to
  •  
    There are five reasons why Google is doing this, and, if you read the comic book closely - yes, I'm serious - and you know technology you can see the reasons for yourself. These, in turn, lead to what I think is Google's real goal for Chrome.
  •  
    I'm still keeping the door open on a suspicion that Microsoft may have planned to end the life of MS Office after the new fortress on the server side is ready. The code base is simply too brittle to have a competitive future in the feature wars. I can't get past my belief that if Microsoft saw any future in the traditional client-side office suite, it would have been building a new one a decade ago. Too many serious bugs too deeply buried in spaghetti code to fix; it's far easier to rebuild from the ground up. Word dates to 1984, Excel to 1985, PowerPoint to 1987. All were developed for the Mac and ported years later to Windows. Word, at least, is still running a deeply flawed 16-bit page layout engine. E.g., page breaks across subdocuments have been broken since Word 1.0. Technology designed to replace yet still largely defined by its predecessor, the IBM Correcting Selectric electro-mechanical typewriter. Mid-80s stand-alone, non-networked computer technology in the World Wide Web era? Where's the future in software architecture developed two decades ago, before the Connected World? I suspect Office's end is near. Microsoft's problem is migrating their locked-in customers to the new fortress on the server side. The bridge is OOXML. In other words, Google doesn't have to kill Office; Microsoft will do that itself. Giving the old cash cow a face lift and fresh coat of lipstick? That's the surest sign that the old cow's owner is keeping a close eye on prices in the commodity hamburger market while squeezing out the last few buckets of milk.
Gary Edwards

Introduction to OpenCalais | OpenCalais - 0 views

  •  
    "The free OpenCalais service and open API is the fastest way to tag the people, places, facts and events in your content. It can help you improve your SEO, increase your reader engagement, create search-engine-friendly 'topic hubs' and streamline content operations - saving you time and money. OpenCalais is free to use in both commercial and non-commercial settings, but can only be used on public content (don't run your confidential or competitive company information through it!). OpenCalais does not keep a copy of your content, but it does keep a copy of the metadata it extracts therefrom. To repeat, OpenCalais is not a private service, and there is no secure, enterprise version that you can buy to operate behind a firewall. It is your responsibility to police the content that you submit, so make sure you are comfortable with our Terms of Service (TOS) before you jump in. You can process up to 50,000 documents per day (blog posts, news stories, Web pages, etc.) free of charge. If you need to process more than that - say you are an aggregator or a media monitoring service - then see this page to learn about Calais Professional. We offer a very affordable license. OpenCalais' early adopters include CBS Interactive / CNET, Huffington Post, Slate, Al Jazeera, The New Republic, The White House and more. Already more than 30,000 developers have signed up, and more than 50 publishers and 75 entrepreneurs are using the free service to help build their businesses. You can read about the pioneering work of these publishers, entrepreneurs and developers here. To get started, scroll to the bottom section of this page. To build OpenCalais into an existing site or publishing platform (CMS), you will need to work with your developers.

    Why OpenCalais Matters

    The reason OpenCalais - and so-called "Web 3.0" in general (concepts like the Semantic Web, Linked Data, etc.) - are important is that these technologies make it easy to automatically conne
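    The "tag the people, places, facts and events" pitch boils down to: you send content, the service returns typed entities as metadata, and you group them to build things like topic hubs. A sketch of that post-processing step (the JSON response shape here is invented for illustration only -- it is not the actual OpenCalais wire format, which you would take from their API docs):

```python
import json

def entities_by_type(response_text):
    """Group extracted entities by type -- the kind of post-processing a
    publisher would do with tagging output before building topic hubs.
    The response shape is hypothetical, not the real Calais format."""
    payload = json.loads(response_text)
    grouped = {}
    for ent in payload["entities"]:
        grouped.setdefault(ent["type"], []).append(ent["name"])
    return grouped

# A made-up response of the general kind an entity-tagging API returns.
sample = json.dumps({"entities": [
    {"type": "Person", "name": "Ada Lovelace"},
    {"type": "City", "name": "London"},
    {"type": "Person", "name": "Charles Babbage"},
]})
print(entities_by_type(sample))
# → {'Person': ['Ada Lovelace', 'Charles Babbage'], 'City': ['London']}
```

    The caution in the text applies at the call site: whatever you POST to the service is content you've decided is public, and only the extracted metadata (the grouped entities above) is retained by the service.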
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential 2015 inquiry into British surveillance, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Fundamental Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

Archiveteam - 0 views

  • HISTORY IS OUR FUTURE And we've been trashing our history Archive Team is a loose collective of rogue archivists, programmers, writers and loudmouths dedicated to saving our digital heritage. Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions - and done our best to save the history before it's lost forever. Along the way, we've gotten attention, resistance, press and discussion, but most importantly, we've gotten the message out: IT DOESN'T HAVE TO BE THIS WAY. This website is intended to be an offloading point and information depot for a number of archiving projects, all related to saving websites or data that is in danger of being lost. Besides serving as a hub for team-based pulling down and mirroring of data, this site will provide advice on managing your own data and rescuing it from the brink of destruction. Currently Active Projects (Get Involved Here!) Archive Team recruiting Want to code for Archive Team? Here's a starting point.
  • Who We Are and how you can join our cause! Deathwatch is where we keep track of sites that are sickly, dying or dead. Fire Drill is where we keep track of sites that seem fine but a lot depends on them. Projects is a comprehensive list of AT endeavors. Philosophy describes the ideas underpinning our work. Some Starting Points The Introduction is an overview of basic archiving methods. Why Back Up? Because they don't care about you. Back Up your Facebook Data Learn how to liberate your personal data from Facebook. Software will assist you in regaining control of your data by providing tools for information backup, archiving and distribution. Formats will familiarise you with the various data formats, and how to ensure your files will be readable in the future. Storage Media is about where to get it, what to get, and how to use it. Recommended Reading links to other sites for further information. Frequently Asked Questions is where we answer common questions.
  •  
    The Archive Team Warrior is a virtual archiving appliance. You can run it to help with the ArchiveTeam archiving efforts. It will download sites and upload them to our archive - and it's really easy to do! The warrior is a virtual machine, so there is no risk to your computer. The warrior will only use your bandwidth and some of your disk space. It will get tasks from and report progress to the Tracker. Basic usage: the warrior runs on Windows, OS X and Linux using a virtual machine. You'll need one of: VirtualBox (recommended) or VMware Workstation/Player (free-gratis for personal use); see below for alternative virtual machines. Archive Team partners with and contributes lots of archives to the Wayback Machine. Here's how you can help: contribute some bandwidth if you run an always-on box with an internet connection.
Paul Merrell

Hey ITU Member States: No More Secrecy, Release the Treaty Proposals | Electronic Front... - 0 views

  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet.
  • In similar fashion to the secrecy surrounding ACTA and TPP, the ITR proposals are being negotiated in secret, with high barriers preventing access to any negotiating document. While aspiring to be a venue for Internet policy-making, the ITU Member States do not appear to be very open to the idea of allowing all stakeholders (including civil society) to participate. The framework under which the ITU operates does not allow for any form of open participation. Mere access to documents and decision-makers is sold by the ITU to corporate “associate” members at prohibitively high rates. Indeed, the ITU’s business model appears to depend on revenue generation from those seeking to ‘participate’ in its policy-making processes. This revenue-based principle of policy-making is deeply troubling in and of itself, as the objective of policy making should be to reach the best possible outcome.
  • EFF, European Digital Rights, CIPPIC and CDT, and a coalition of civil society organizations from around the world are demanding that the ITU Secretary General, the WCIT-12 Council Working Group, and ITU Member States open up the WCIT-12 and the Council Working Group negotiations by immediately releasing all the preparatory materials and Treaty proposals. If it affects the digital rights of citizens across the globe, the public needs to know what is going on and deserves to have a say. The Council Working Group is responsible for the preparatory work towards WCIT-12, setting the agenda for and consolidating input from participating governments and Sector Members. We demand full and meaningful participation for civil society in its own right, and without cost, at the Council Working Group meetings and the WCIT on equal footing with all other stakeholders, including participating governments. A transparent, open process that is inclusive of civil society at every stage is crucial to creating sound policy.
  • ...5 more annotations...
  • Civil society has good reason to be concerned regarding an expanded ITU policy-making role. To begin with, the institution does not appear to have high regard for the distributed multi-stakeholder decision making model that has been integral to the development of an innovative, successful and open Internet. In spite of commitments at WSIS to ensure Internet policy is based on input from all relevant stakeholders, the ITU has consistently put the interests of one stakeholder—Governments—above all others. This is discouraging, as some government interests are inconsistent with an open, innovative network. Indeed, the conditions which have made the Internet the powerful tool it is today emerged in an environment where the interests of all stakeholders are given equal footing, and existing Internet policy-making institutions at least aspire, with varying success, to emulate this equal footing. This formula is enshrined in the Tunis Agenda, which was committed to at WSIS in 2005:
  • 83. Building an inclusive development-oriented Information Society will require unremitting multi-stakeholder effort. We thus commit ourselves to remain fully engaged—nationally, regionally and internationally—to ensure sustainable implementation and follow-up of the outcomes and commitments reached during the WSIS process and its Geneva and Tunis phases of the Summit. Taking into account the multifaceted nature of building the Information Society, effective cooperation among governments, private sector, civil society and the United Nations and other international organizations, according to their different roles and responsibilities and leveraging on their expertise, is essential. 84. Governments and other stakeholders should identify those areas where further effort and resources are required, and jointly identify, and where appropriate develop, implementation strategies, mechanisms and processes for WSIS outcomes at international, regional, national and local levels, paying particular attention to people and groups that are still marginalized in their access to, and utilization of, ICTs.
  • Indeed, the ITU’s current vision of Internet policy-making is less one of distributed decision-making, and more one of ‘taking control.’ For example, in an interview conducted last June with ITU Secretary General Hamadoun Touré, Russian Prime Minister Vladimir Putin raised the suggestion that the union might take control of the Internet: “We are thankful to you for the ideas that you have proposed for discussion,” Putin told Touré in that conversation. “One of them is establishing international control over the Internet using the monitoring and supervisory capabilities of the International Telecommunication Union (ITU).” Perhaps of greater concern are views espoused by the ITU regarding the nature of the Internet. Yesterday, at the World Summit on the Information Society Forum, Mr. Alexander Ntoko, head of the Corporate Strategy Division of the ITU, explained the proposals made during the preparatory process for the WCIT, outlining a broad set of topics that can seriously impact people's rights. The categories include "security," "interoperability" and "quality of services," and the possibility that ITU recommendations and regulations will be not only binding on the world’s nations, but enforced.
  • Rights to online expression are unlikely to fare much better than privacy under an ITU model. During last year’s IGF in Kenya, a voluntary code of conduct was issued to further restrict free expression online. A group of nations (including China, the Russian Federation, Tajikistan and Uzbekistan) released a Resolution for the UN General Assembly titled, “International Code of Conduct for Information Security.” The Code seems to be designed to preserve and protect national powers in information and communication. In it, governments pledge to curb “the dissemination of information that incites terrorism, secessionism or extremism or that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment.” This overly broad provision accords any state the right to censor or block international communications, for almost any reason.
Paul Merrell

German Parliament Says No More Software Patents | Electronic Frontier Foundation - 0 views

  • The German Parliament recently took a huge step that would eliminate software patents (PDF) when it issued a joint motion requiring the German government to ensure that computer programs are only covered by copyright. Put differently, in Germany, software cannot be patented. The Parliament's motion follows a similar announcement made by New Zealand's government last month (PDF), in which it determined that computer programs were not inventions or a manner of manufacture and, thus, cannot be patented.
  • The crux of the German Parliament's motion rests on the fact that software is already protected by copyright, and developers are afforded "exploitation rights." These rights, however, become confused when broad, abstract patents also cover general aspects of computer programs. These two intellectual property systems are at odds. The clearest example of this clash is with free software. The motion recognizes this issue and therefore calls upon the government "to preserve the precedence of copyright law so that software developers can also publish their work under open source license terms and conditions with legal security." The free software movement relies upon the fact that software can be released under a copyright license that allows users to share it and build upon others' works. Patents, as Parliament finds, inhibit this fundamental spread.
  • Just like in the New Zealand order, the German Parliament carved out one type of software that could be patented, when: "the computer program serves merely as a replaceable equivalent for a mechanical or electro-mechanical component, as is the case, for instance, when software-based washing machine controls can replace an electromechanical program control unit consisting of revolving cylinders which activate the control circuits for the specific steps of the wash cycle." This allows for software that is tied to (and controls part of) another invention to be patented. In other words, if a claimed process is purely a computer program, then it is not patentable. (New Zealand's order uses a similar washing machine example.) The motion ends by calling upon the German government to push for this approach to be standard across all of Europe. We hope policymakers in the United States will also consider fundamental reform that deals with the problems caused by low-quality software patents. Ultimately, any real reform must address this issue.
  •  
    Note that an unofficial translation of the parliamentary motion is linked from the article. This adds substantially to the pressure internationally to end software patents because Germany has been the strongest defender of software patents in Europe. The same legal grounds would not apply in the U.S. The strongest argument for non-patentability in the U.S., in my opinion, is that software patents embody both prior art and obviousness. A general purpose computer can accomplish nothing unforeseen by the prior art of the computing device. And it is impossible for software to do more than cause different sequences of bit register states to be executed. This is the province of "skilled artisans" using known methods to produce predictable results. There is a long line of Supreme Court decisions holding that an "invention" with such traits is non-patentable. I have summarized that argument with citations at . 
Gary Edwards

Apple and Facebook Flash Forward to Computer Memory of the Future | Enterprise | WIRED - 1 views

  •  
    Great story that is at the center of a new cloud computing platform. I met David Flynn back when he was first demonstrating the Realmsys flash card. Extraordinary stuff. He was using the technology to open a secure Linux computing window on an operating Windows XP system. The card opened up a secure data socket, connecting to any Internet Server or Data Server, and running applications on that data - while running Windows and Windows apps in the background. Incredible mesh of Linux, streaming data, and legacy Windows apps. Everytime I find these tech pieces explaining Fusion-io though, I can't help but think that David Flynn is one of the most decent, kind and truly deserving of success people that I have ever met. excerpt: "Apple is spending mountains of money on a new breed of hardware device from a company called Fusion-io. As a public company, Fusion-io is required to disclose information about customers that account for an unusually large portion of its revenue, and with its latest annual report, the Salt Lake City outfit reveals that in 2012, at least 25 percent of its revenue - $89.8 million - came from Apple. That's just one figure, from just one company. But it serves as a sign post, showing you where the modern data center is headed. 'There's now a blurring between the storage world and the memory world. People have been enlightened by Fusion-io.' - Gary Gentry Inside a data center like the one Apple operates in Maiden, North Carolina, you'll find thousands of computer servers. Fusion-io makes a slim card that slots inside these machines, and it's packed with hundreds of gigabytes of flash memory, the same stuff that holds all the software and the data on your smartphone. You can think of this card as a much-needed replacement for the good old-fashioned hard disk that typically sits inside a server. Much like a hard disk, it stores information. But it doesn't have any moving parts, which means it's generally more reliable. It c
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • ...8 more annotations...
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly, email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-; - )”
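The "Perfect Forward Secrecy" mentioned in the annotations above comes down to ephemeral key agreement: each connection derives its session key from throwaway secrets, so recorded traffic stays unreadable even if a long-term key later leaks. A toy Diffie-Hellman sketch (parameters deliberately far too small for real use; deployed TLS uses 2048-bit-plus groups or elliptic curves):

```python
import secrets

# Toy finite-field Diffie-Hellman, illustration only. Real deployments use
# standardized 2048-bit-plus groups or elliptic-curve equivalents.
P = 2**127 - 1   # a Mersenne prime; far too small for real security
G = 5

def ephemeral_keypair():
    """Mint a fresh, throwaway secret for a single session."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each side generates a new key pair per connection...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...exchanges only the public halves, and computes the same session secret.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
print(shared_a == shared_b)  # True
```

Because both private values are discarded when the session ends, an eavesdropper who records the traffic and later compromises either party's long-term key has nothing with which to reconstruct the session secret. This is exactly what makes stored intercepts "labor intensive" to read.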
Gary Edwards

Flex/Flash: About Singleton, Threads and Flex | Blogging about Software Development - 0 views

  • Flex applications are, like Flash applications, compiled into an SWF file. Once a user visits the webpage containing your Flex application, the SWF file is downloaded to and run from the client computer. Instead of a separate session, each user receives their own copy of your Flex application. The client computer runs the Flash VM, which in turn fires up the local copy of your Flex application. Furthermore, Flex uses the Actionscript scripting language. The current version is Actionscript 3. Actionscript 3 is single-threaded. By now you probably already see where this is going. The single-threaded nature of Flex applications means synchronization is not required.
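The threading point above, that a single-threaded runtime makes an unsynchronized singleton safe, can be illustrated outside ActionScript. A minimal sketch in Python (class name hypothetical), showing the lazy check that is race-free on one thread plus the lock a multi-threaded host would have to add:

```python
import threading

class AppModel:
    """Hypothetical lazily-created singleton; Python stands in for AS3.

    In a single-threaded VM, like the Flash Player described above, the
    plain 'if _instance is None' check can never race, so no lock is
    needed. The lock below is what a multi-threaded host would add.
    """
    _instance = None
    _lock = threading.Lock()  # dead weight on a single-threaded runtime

    @classmethod
    def instance(cls):
        # Double-checked locking: cheap unlocked read on the common path,
        # lock taken only for the first construction.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

a = AppModel.instance()
b = AppModel.instance()
print(a is b)  # True: every caller sees the same object
```

In the Flash VM only one thread ever reaches `instance()`, so the bare `None` check is sufficient; the double-checked pattern earns its keep only where true concurrency exists.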
Gary Edwards

The Future of the Desktop - ReadWriteWeb by Nova Spivak - 0 views

  •  
    Excellent commentary from Nova Spivak; about as well thought out a discussion as I've ever seen concerning the future of the desktop. Nova sees the emergence of a WebOS, most likely based on JavaScript. This article set off a firestorm of controversy and discussion, but was quickly lost in the dark days of late August/September of 2008, where news of the subsequent collapse of the world financial system and the fear-filled USA elections dominated everything. Too bad; this is great stuff. ..... "Everything is moving to the cloud. As we enter the third decade of the Web we are seeing an increasing shift from native desktop applications towards Web-hosted clones that run in browsers. For example, a range of products such as Microsoft Office Live, Google Docs, Zoho, ThinkFree, DabbleDB, Basecamp, and many others now provide Web-based alternatives to the full range of familiar desktop office productivity apps. The same is true for an increasing range of enterprise applications, led by companies such as Salesforce.com, and this process seems to be accelerating. In addition, hosted remote storage for individuals and enterprises of all sizes is now widely available and inexpensive. As these trends continue, what will happen to the desktop and where will it live?" .... Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today? ..... The desktop of the future is going to be a hosted web service ..... The Browser is Going to Swallow Up the Desktop ...... The focus of the desktop will shift from information to attention ...... Users are going to shift from acting as librarians to acting as daytraders. ...... The Webtop will be more social and will leverage and integrate collective intelligence ....... The desktop of the future is going to have powerful semantic search and social search capabilities built-in ....... Interactive shared spaces will replace folders ....... The Portable Desktop ........ The Sma
Gary Edwards

Apple's extensions: Good or bad for the open web? | Fyrdility - 0 views

  •  
    Fyrdility asks the question: when it comes to the future of the Open Web, is Apple worse than Microsoft? He laments the fact that Apple pushes forward with innovations that have yet to be discussed by the greater Web community. Yes, they faithfully submit these extensions and innovations back to the W3C as open standards proposals, but there is no waiting around for discussion or judgement. Apple is on a mission.

    IMHO, what Apple and the WebKit community do is not that much different from the way GPL based open source communities work, except that Apple works without the GPL guarantee. The WebKit innovations and extensions are similar to GPL forks in the shared source code; done in the open, contributed back to the community, with the community responsible for interoperability going forward.

    There are good forks and there are not so good forks. But it's not always a technology-engineering discussion that drives interop. sometimes it's marketshare and user uptake that carry the day. And indeed, this is very much the case with Apple and the WebKit community. The edge of the Web belongs to WebKit and the iPhone. The "forks" to the Open Web source code are going to weigh heavy on concerns for interop with the greater Web.

    One thing Fyrdility fails to recognize is the importance of the Acid3 test to future interop. Discussion is important, but nothing beats the leveling effect of broadly measuring innovation for interop - and doing so without crippling innovation.

    "......Apple is heavily involved in the W3C and WHATWG, where they help define specifications. They are also well-known for implementing many unofficial CSS extensions, which are subsequently submitted for standardization. However, Apple is also known for preventing its representatives from participating in panels such as the annual Browser Wars panels at SXSW, which expresses a much less cooperative position...."
Paul Merrell

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Inter... - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders “within a fixed time unless the government demonstrates a real need for further secrecy.” And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its “comprehensive effort to examine and enhance [its] privacy and civil liberty protections,” one of the most concrete was — finally — to cap the gag orders: In response to the President’s new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation’s close. Continued nondisclosure orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • ...6 more annotations...
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 1 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • ...9 more annotations...
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, "File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 of the 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES (in alphabetical order)
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.)

    Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person hours to search for, review, and redact FOIA requests only to mail it to the requester and have them slip it into their desk drawer and forget about it. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status demonstrating this very principle; it spent no new funds and did not hire contractors to build its Electronic Reading Room, instead building a self-sustaining platform that will save the agency time and money going forward.
Paul Merrell

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen. - 2 views

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine. Good evening sir!
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out is his second nature.
  • ...7 more annotations...
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as having a liberating and safe internet, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he was in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions — and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me. Documents published with this article: Tracking Targets Through Proxies & Anonymizers Network Shaping 101 Shaping Diagram I Hunt Sys Admins (first published in 2014)
Gary Edwards

Siding with HTML over XHTML, My Decision to Switch - Monday By Noon - 0 views

  • Publishing content on the Web is in no way limited to professional developers or designers; much of the reason the net is so active is because anyone can make a website. Sure, we (as knowledgeable professionals or hobbyists) all hope to make the Web a better place by doing our part in publishing documents with semantically rich, valid markup, but the reality is that those documents are rare. It’s important to keep in mind the true nature of the Internet: an open platform for information sharing.
  • XHTML2 has some very good ideas that I hope can become part of the web. However, it’s unrealistic to think that all web authors will switch to an XML-based syntax which demands that browsers stop processing the document on the first error. XML’s draconian policy was an attempt to clean up the web. This was done around 1996 when lots of invalid content entered the web. CSS took a different approach: instead of demanding that content isn’t processed, we defined rules for how to handle the undefined. It’s called “forward-compatible parsing” and means we can add new constructs without breaking the old. So, I don’t think XHTML is a realistic option for the masses. HTML 5 is it.
    • Gary Edwards
       
      Great quote from CSS expert Hakon Wium Lie.
  • @marbux: Of course I disagree with your interop assessment, but I wondered how it is that you’re missing the point. I think you confuse web applications with the legacy desktop client/server application model. And that confusion leads to the mistake of trying to transfer the desktop document model to one that could adequately service advancing web applications.
  •  
    A CMS expert argues for HTML over XHTML, explaining his reasons for switching. Excellent read! He nails the basics. For similar reasons, we moved from ODF to ePUB and then to CDF and finally to the advanced WebKit document model, where wikiWORD will make its stand.
  •  
    See also my comment on the same web page that explains why HTML 5 is NOT it for document exchange between web editing applications.
  •  
    Response to marbux supporting the WebKit layout/document model. Marbux argues that HTML5 is not interoperable and CSS2 near useless; HTML5 fails regarding the interop web applications need. I respond by arguing that the only way to look at web applications is to consider that the browser layout engine is the web application layout engine! Web applications are actually written to the browser layout/document model, or to take advantage of browser plug-in capabilities. The interoperability marbux seeks is tied directly to the browser layout engine. In this context, the web format is simply a reflection of that layout engine. If there's an interop problem, it comes from browser madness differentials.

    The good news is that there are all kinds of efforts to close the browser gap, including WHATWG HTML5, CSS3, the W3C DOM, JavaScript libraries, Google GWT (Java to JavaScript), the Yahoo GUI, and my favorite, WebKit. The bad news is that the clock is ticking. Microsoft has pulled the trigger, and the great migration of MSOffice client/server systems to the MS WebStack-Mesh architecture has begun. Key to this transition are the WPF-.NET proprietary formats, protocols and interfaces such as XAML, Silverlight, LINQ, and Smart Tags. New business processes are being written, and old legacy desktop-bound processes are being transitioned to this emerging platform. The fight for the Open Web is on, with Microsoft threatening to transition their entire business desktop monopoly to a Web platform they own. ~ge~
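    The "draconian" XML policy that Lie contrasts with HTML's forgiveness is easy to demonstrate: an XML parser must abort on the first well-formedness error, while an HTML parser recovers and keeps going. A minimal sketch in Python, using only the standard library (the sample markup is invented for illustration):

```python
from xml.etree import ElementTree
from html.parser import HTMLParser

# Typical "tag soup": an unclosed <br> and an unclosed <b>.
bad_markup = "<p>unclosed paragraph<br>stray <b>bold</p>"

# XML's draconian policy: the first well-formedness error aborts parsing.
try:
    ElementTree.fromstring(bad_markup)
    xml_ok = True
except ElementTree.ParseError:
    xml_ok = False

# HTML's forgiving policy: the parser recovers and reports every tag it finds.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(bad_markup)

print(xml_ok)          # False: XML refused the document outright
print(collector.tags)  # ['p', 'br', 'b']: HTML salvaged every tag
```

    That recovery behavior is what lets decades of invalid pages keep rendering, and it is the practical heart of Lie's argument that HTML 5, not XHTML, is the realistic option for the masses.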
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • ...9 more annotations...
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • Tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
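The trust problem described above can be illustrated with a short conceptual sketch: SS7 was built for a closed club of carriers, so a location query is answered on the basis of network access alone, with no authentication of who is asking or why. All class and variable names here are illustrative; this is a toy model, not real SS7/MAP code.

```python
# Conceptual model of the SS7 weakness described above: any party with
# network access can query a carrier's location database, and the protocol
# cannot distinguish a peer carrier from a surveillance vendor.

class CarrierHLR:
    """A carrier's subscriber-location database, reachable over SS7."""

    def __init__(self, locations):
        # Maps phone number -> serving cell area. Carriers must keep this
        # to route incoming calls and texts to their subscribers.
        self.locations = locations

    def handle_query(self, sender, msisdn):
        # SS7 assumes every sender on the network is a trusted carrier,
        # so 'sender' is taken at face value rather than authenticated.
        return self.locations.get(msisdn)


hlr = CarrierHLR({"+15551234567": "cell-area-4711"})

# A legitimate peer carrier asks in order to route a call...
assert hlr.handle_query("peer-carrier", "+15551234567") == "cell-area-4711"

# ...and a surveillance hub with rented SS7 access sends the identical
# query and gets the identical answer.
assert hlr.handle_query("surveillance-hub", "+15551234567") == "cell-area-4711"
```

The point of the sketch is that the vulnerability is architectural, not a bug: once thousands of companies hold SS7 access, any one of them can resell that access to a tracking-system vendor.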
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
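The two-stage workflow in the paragraph above can be sketched as a toy model: an SS7 query supplies a coarse area, and an IMSI catcher (effective only within roughly a mile or two, per the article) then provides precise tracking. The function names and distances are illustrative assumptions, not a real tool.

```python
# Toy model of the two-stage tracking workflow: SS7 narrows the search to
# a general area, after which an in-range IMSI catcher can take over.

IMSI_CATCHER_RANGE_MILES = 2  # approximate figure cited in the article

def ss7_locate(msisdn, carrier_db):
    """Stage 1: coarse location from a carrier's database via SS7."""
    return carrier_db.get(msisdn)  # e.g. a city district or cell area

def imsi_catcher_usable(distance_to_target_miles):
    """Stage 2 is only viable once the target is within radio range."""
    return distance_to_target_miles <= IMSI_CATCHER_RANGE_MILES


carrier_db = {"+15551234567": "downtown-district"}

# Without stage 1, the catcher is useless: the operator has no idea
# where to deploy it.
assert not imsi_catcher_usable(50)

# With the general area known, the operator can move close enough for
# precise interception.
area = ss7_locate("+15551234567", carrier_db)
assert area == "downtown-district"
assert imsi_catcher_usable(1.5)
```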
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel, and the bulk of its manufacturing and code development was done there. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that won the Department of Homeland Security contract to provide and install an electronic and video surveillance system along the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.