
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
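The "single command" automation described above rests on the ACME protocol that Let's Encrypt uses: the CA proves a server controls a domain by asking it to publish a token-derived string at a well-known URL (the HTTP-01 challenge). A minimal, stdlib-only sketch of the strings involved — the token and thumbprint values here are hypothetical placeholders, and canonicalization of the account key's JWK is omitted; real clients handle all of this automatically:

```python
import base64
import hashlib

def challenge_path(token: str) -> str:
    # The CA validates domain control by fetching
    # http://<domain>/.well-known/acme-challenge/<token>
    return f"/.well-known/acme-challenge/{token}"

def key_authorization(token: str, account_key_thumbprint: str) -> str:
    # The response body must be "<token>.<thumbprint>", tying the
    # challenge to the account key that requested the certificate.
    return f"{token}.{account_key_thumbprint}"

def jwk_thumbprint(canonical_jwk_json: bytes) -> str:
    # Base64url-encoded SHA-256 of the canonical JWK, padding stripped.
    # (Producing the canonical JWK bytes is omitted in this sketch.)
    digest = hashlib.sha256(canonical_jwk_json).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

Because the whole exchange is mechanical string construction and one HTTP fetch, it needs no human intervention — which is what lets the client collapse certificate issuance into one command.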
Paul Merrell

Guest Post: NSA Reform - The Consequences of Failure | Just Security

  • In the absence of real reform, people and institutions at home and abroad are taking matters into their own hands. In America, the NSA’s overreach is changing the way we communicate with and relate to each other. In order to evade government surveillance, more and more Americans are employing encryption technology.  The veritable explosion of new secure messaging apps like Surespot, Open Whisper Systems’ collaboration with WhatsApp, the development and deployment of open source anti-surveillance tools like Detekt, the creation of organizationally-sponsored “surveillance self-defense” guides, the push to universalize the https protocol, anti-surveillance book events featuring free encryption workshops: all are manifestations of the rise of the personal encryption and pro-privacy digital resistance movement. Its political implications are clear: Americans, along with people around the world, increasingly see the United States government’s overreaching surveillance activities as a threat to be blocked.
  • The federal government’s vacuum-cleaner approach to surveillance—manifested in Title II of the PATRIOT Act, the FISA Amendments Act, and EO 12333—has backfired in these respects, and the emergence of this digital resistance movement is one result. Indeed, the existence and proliferation of social networks hold the potential to help this movement spread faster and to more of the general public than would have been possible in decades past. This is evidenced by the growing concern worldwide about governments’ ability to access reams of information about people’s lives with relative ease. As one measure, compared to a year ago, 41% of online users in North America now avoid certain Internet sites and applications, 16% change who they communicate with, and 24% censor what they say online. Those numbers, if anywhere close to accurate, are a major concern for democratic society.
  • Even if commercially available privacy technology proves capable of providing a genuine shield against warrantless or otherwise illegal surveillance by the United States government, it will remain a treatment for the symptom, not a cure for the underlying legal and constitutional malady. In April 2014, a Harris poll of US adults showed that in response to the Snowden revelations, “Almost half of respondents (47%) said that they have changed their online behavior and think more carefully about where they go, what they say, and what they do online.” Set aside for a moment that just the federal government’s collection of the data of innocent Americans is itself likely a violation of the Fourth Amendment. The Harris poll is just one of numerous studies highlighting the collateral damage to American society and politics from NSA’s excesses: segments of our population are now fearful of even associating with individuals or organizations executive branch officials deem controversial or suspicious. Nearly half of Americans say they have changed their online behavior out of a fear of what the federal government might do with their personal information. The Constitution’s free association guarantee has been damaged by the Surveillance State’s very operation.
  • The failure of the Congress and the courts to end the surveillance state, despite the repeated efforts by a huge range of political and public interest actors to effect that change through the political process, is only fueling the growing resistance movement. Federal officials understand this, which is why they are trying—desperately and in the view of some, underhandedly—to shut down this digital resistance movement. This action/reaction cycle is exactly what it appears to be: an escalating conflict between the American public and its government. Without comprehensive surveillance authority reforms (including a journalist “shield law” and ironclad whistleblower protections for Intelligence Community contractors) that are verifiable and enforceable, that conflict will only continue.
Paul Merrell

BBC News - GCHQ's Robert Hannigan says tech firms 'in denial' on extremism

  • Web giants such as Twitter, Facebook and WhatsApp have become "command-and-control networks... for terrorists and criminals", GCHQ's new head has said. Islamic State extremists had "embraced" the web but some companies remained "in denial" over the problem, Robert Hannigan wrote in the Financial Times. He called for them to do more to co-operate with security services. However, civil liberties campaigners said the companies were already working with the intelligence agencies. None of the major tech firms has yet responded to Mr Hannigan's comments.
  • Mr Hannigan said IS had "embraced the web as a noisy channel in which to promote itself, intimidate people, and radicalise new recruits." The "security of its communications" added another challenge to agencies such as GCHQ, he said - adding that techniques for encrypting - or digitally scrambling - messages "which were once the preserve of the most sophisticated criminals or nation states now come as standard". GCHQ and its sister agencies, MI5 and the Secret Intelligence Service, could not tackle these challenges "at scale" without greater support from the private sector, including the largest US technology companies which dominate the web, he wrote.
  •  
    What I want to know is what we're going to do with that NSA data center at Bluffdale, Utah, after the NSA is abolished? Maybe give it to the Internet Archive?
Gonzalo San Gil, PhD.

Judge: IP-Address Doesn't Identify a Movie Pirate | TorrentFreak

  •  
    "In a prominent ruling Florida District Court Judge Ursula Ungaro refused to issue a subpoena, arguing that IP-address evidence is not enough to show who has downloaded a pirated movie."
Paul Merrell

The NSA's SKYNET program may be killing thousands of innocent people | Ars Technica UK

  • "Ridiculously optimistic" machine learning algorithm is "completely bullshit," says expert.
  •  
    Gack! We have lunatics running our government. 
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus...

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities, and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows.

    Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology concluded that:

    [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.

    Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope

    In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately recognizable as contraband. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
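The key-escrow risk the annotations above describe — a vendor database of per-device access keys that, once breached, unlocks every phone — can be made concrete with a toy model. Everything here is hypothetical and stdlib-only; the XOR stream cipher derived from HMAC stands in for real device encryption purely for illustration:

```python
import hashlib
import hmac
import secrets

# Hypothetical vendor escrow database: device ID -> access key.
ESCROW_DB = {}

def provision_device(device_id: str) -> bytes:
    key = secrets.token_bytes(32)
    ESCROW_DB[device_id] = key  # the mandated "backdoor" copy
    return key

def _keystream(key: bytes, n: int) -> bytes:
    # Derive a keystream of n bytes from the key with HMAC-SHA256 blocks.
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher for illustration only -- not real device crypto.
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Anyone who exfiltrates ESCROW_DB can decrypt any provisioned device:
key = provision_device("phone-123")
ct = encrypt(key, b"diary, passwords, bank app tokens")
stolen_key = ESCROW_DB["phone-123"]  # single database breach
assert decrypt(stolen_key, ct) == b"diary, passwords, bank app tokens"
```

The design point is the one the piece makes: even "a separate access key for each phone" concentrates all of those keys in one place, so the attacker's problem reduces to breaching one database rather than one phone at a time.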
Paul Merrell

Bulk Collection Under Section 215 Has Ended… What's Next? | Just Security

  • The first (and thus far only) roll-back of post-9/11 surveillance authorities was implemented over the weekend: The National Security Agency shuttered its program for collecting and holding the metadata of Americans’ phone calls under Section 215 of the Patriot Act. While bulk collection under Section 215 has ended, the government can obtain access to this information under the procedures specified in the USA Freedom Act. Indeed, some experts have argued that the Agency likely has access to more metadata because its earlier dragnet didn’t cover cell phones or Internet calling. In addition, the metadata of calls made by an individual in the United States to someone overseas and vice versa can still be collected in bulk — this takes place abroad under Executive Order 12333. No doubt the NSA wishes that this was the end of the surveillance reform story and the Paris attacks initially gave them an opening. John Brennan, the Director of the CIA, implied that the attacks were somehow related to “hand wringing” about spying and Sen. Tom Cotton (R-Ark.) introduced a bill to delay the shut down of the 215 program. Opponents of encryption were quick to say: “I told you so.”
  • But the facts that have emerged thus far tell a different story. It appears that much of the planning took place IRL (that’s “in real life” for those of you who don’t have teenagers). The attackers, several of whom were on law enforcement’s radar, communicated openly over the Internet. If France ever has a 9/11 Commission-type inquiry, it could well conclude that the Paris attacks were a failure of the intelligence agencies rather than a failure of intelligence authorities. Despite the passage of the USA Freedom Act, US surveillance authorities have remained largely intact. Section 702 of the FISA Amendments Act — which is the basis of programs like PRISM and the NSA’s Upstream collection of information from Internet cables — sunsets in the summer of 2017. While it’s difficult to predict the political environment that far out, meaningful reform of Section 702 faces significant obstacles. Unlike the Section 215 program, which was clearly aimed at Americans, Section 702 is supposedly targeted at foreigners and only picks up information about Americans “incidentally.” The NSA has refused to provide an estimate of how many Americans’ information it collects under Section 702, despite repeated requests from lawmakers and most recently a large cohort of advocates. The Section 215 program was held illegal by two federal courts (here and here), but civil attempts to challenge Section 702 have run into standing barriers. Finally, while two review panels concluded that the Section 215 program provided little counterterrorism benefit (here and here), they found that the Section 702 program had been useful.
  • There is, nonetheless, some pressure to narrow the reach of Section 702. The recent decision by the European Court of Justice in the safe harbor case suggests that data flows between Europe and the US may be restricted unless the PRISM program is modified to protect the information of Europeans (see here, here, and here for discussion of the decision and reform options). Pressure from Internet companies whose business is suffering — estimates run to the tune of $35 billion to $180 billion — as a result of disclosures about NSA spying may also nudge lawmakers towards reform. One of the courts currently considering criminal cases that rely on evidence derived from Section 702 surveillance may hold the program unconstitutional either on the basis of the Fourth Amendment or Article III for the reasons set out in this Brennan Center report. A federal district court in Colorado recently rejected such a challenge, although as explained in Steve’s post, the decision did not seriously explore the issues. Further litigation in the European courts too could have an impact on the debate.
  • The US intelligence community’s broadest surveillance authorities are enshrined in Executive Order 12333, which primarily covers the interception of electronic communications overseas. The Order authorizes the collection, retention, and dissemination of “foreign intelligence” information, which includes information “relating to the capabilities, intentions or activities of foreign powers, organizations or persons.” In other words, so long as they are operating outside the US, intelligence agencies are authorized to collect information about any foreign person — and, of course, any Americans with whom they communicate. The NSA has conceded that EO 12333 is the basis of most of its surveillance. While public information about these programs is limited, a few highlights give a sense of the breadth of EO 12333 operations:
    - The NSA gathers information about every cell phone call made to, from, and within the Bahamas, Mexico, Kenya, the Philippines, and Afghanistan, and possibly other countries.
    - A joint US-UK program tapped into the cables connecting internal Yahoo and Google networks to gather e-mail address books and contact lists from their customers.
    - Another US-UK collaboration collected images from video chats among Yahoo users and possibly other webcam services.
    - The NSA collects both the content and metadata of hundreds of millions of text messages from around the world.
    - By tapping into the cables that connect global networks, the NSA has created a database of the location of hundreds of millions of mobile phones outside the US.
  • Given its scope, EO 12333 is clearly critical to those seeking serious surveillance reform. The path to reform is, however, less clear. There is no sunset provision that requires action by Congress and creates an opportunity for exposing privacy risks. Even in the unlikely event that Congress was inclined to intervene, it would have to address questions about the extent of its constitutional authority to regulate overseas surveillance. To the best of my knowledge, there is no litigation challenging EO 12333 and the government doesn’t give notice to criminal defendants when it uses evidence derived from surveillance under the order, so the likelihood of a court ruling is slim. The Privacy and Civil Liberties Oversight Board is currently reviewing two programs under EO 12333, but it is anticipated that much of its report will be classified (although it has promised a less detailed unclassified version as well). While the short-term outlook for additional surveillance reform is challenging, from a longer-term perspective, the distinctions that our law makes between Americans and non-Americans and between domestic and foreign collection cannot stand indefinitely. If the Fourth Amendment is to meaningfully protect Americans’ privacy, the courts and Congress must come to grips with this reality.
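Why bulk metadata is so revealing, even with no call content at all, can be sketched with a toy contact graph built from hypothetical call-detail records. The names and records below are invented for illustration; the point is that linkage alone exposes who associates with whom, which is the associational harm these annotations describe:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee, minutes).
records = [
    ("alice", "clinic", 12),
    ("alice", "bob", 3),
    ("bob", "journalist", 45),
    ("carol", "journalist", 5),
]

def contact_graph(records):
    # Build an undirected who-talked-to-whom graph from the metadata.
    graph = defaultdict(set)
    for caller, callee, _minutes in records:
        graph[caller].add(callee)
        graph[callee].add(caller)
    return graph

g = contact_graph(records)
# Without hearing a word of any call, the metadata links bob to a
# journalist and alice to a clinic, and identifies the journalist's
# full set of contacts.
assert "journalist" in g["bob"]
assert g["journalist"] == {"bob", "carol"}
```

At the scale of a nationwide dragnet, the same construction yields a social graph of the whole population, which is why the legal distinction between "content" and "metadata" carries so much weight in this debate.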
Paul Merrell

The All Writs Act, Software Licenses, and Why Judges Should Ask More Questions | Just S...

  • Pending before federal magistrate judge James Orenstein is the government’s request for an order obligating Apple, Inc. to unlock an iPhone and thereby assist prosecutors in decrypting data the government has seized and is authorized to search pursuant to a warrant. In an order questioning the government’s purported legal basis for this request, the All Writs Act of 1789 (AWA), Judge Orenstein asked Apple for a brief informing the court whether the request would be technically feasible and/or burdensome. After Apple filed, the court asked it to file a brief discussing whether the government had legal grounds under the AWA to compel Apple’s assistance. Apple filed that brief and the government filed a reply brief last week in the lead-up to a hearing this morning.
  • We’ve long been concerned about whether end users own software under the law. Software owners have rights of adaptation and first sale enshrined in copyright law. But software publishers have claimed that end users are merely licensees, and our rights under copyright law can be waived by mass-market end user license agreements, or EULAs. Over the years, Granick has argued that users should retain their rights even if mass-market licenses purport to take them away. The government’s brief takes advantage of Apple’s EULA for iOS to argue that Apple, the software publisher, is responsible for iPhones around the world. Apple’s EULA states that when you buy an iPhone, you’re not buying the iOS software it runs, you’re just licensing it from Apple. The government argues that having designed a passcode feature into a copy of software which it owns and licenses rather than sells, Apple can be compelled under the All Writs Act to bypass the passcode on a defendant’s iPhone pursuant to a search warrant and thereby access the software owned by Apple. Apple’s supplemental brief argues that in defining its users’ contractual rights vis-à-vis Apple with regard to Apple’s intellectual property, Apple in no way waived its own due process rights vis-à-vis the government with regard to users’ devices. Apple’s brief compares this argument to forcing a car manufacturer to “provide law enforcement with access to the vehicle or to alter its functionality at the government’s request” merely because the car contains licensed software. 
  • This is an interesting twist on the decades-long EULA versus users’ rights fight. As far as we know, this is the first time that the government has piggybacked on EULAs to try to compel software companies to provide assistance to law enforcement. Under the government’s interpretation of the All Writs Act, anyone who makes software could be dragooned into assisting the government in investigating users of the software. If the court adopts this view, it would give investigators immense power. The quotidian aspects of our lives increasingly involve software (from our cars to our TVs to our health to our home appliances), and most of that software is arguably licensed, not bought. Conscripting software makers to collect information on us would afford the government access to the most intimate information about us, on the strength of some words in some license agreements that people never read. (And no wonder: The iPhone’s EULA came to over 300 pages when the government filed it as an exhibit to its brief.)
  • The government’s brief does not acknowledge the sweeping implications of its arguments. It tries to portray its requested unlocking order as narrow and modest, because it “would not require Apple to make any changes to its software or hardware, … [or] to introduce any new ability to access data on its phones. It would simply require Apple to use its existing capability to bypass the passcode on a passcode-locked iOS 7 phone[.]” But that undersells the implications of the legal argument the government is making: that anything a company already can do, it could be compelled to do under the All Writs Act in order to assist law enforcement. Were that the law, the blow to users’ trust in their encrypted devices, services, and products would be little different than if Apple and other companies were legally required to design backdoors into their encryption mechanisms (an idea the government just can’t seem to drop, its assurances in this brief notwithstanding). Entities around the world won’t buy security software if its makers cannot be trusted not to hand over their users’ secrets to the US government. That’s what makes the encryption in iOS 8 and later versions, which Apple has told the court it “would not have the technical ability” to bypass, so powerful — and so despised by the government: Because no matter how broadly the All Writs Act extends, no court can compel Apple to do the impossible.
Paul Merrell

Section 215 and "Fruitless" (?!?) Constitutional Adjudication | Just Security - 0 views

  • This morning, the Second Circuit issued a follow-on ruling to its May decision in ACLU v. Clapper (which had held that the NSA’s bulk telephone records program was unlawful insofar as it had not properly been authorized by Congress). In a nutshell, today’s ruling rejects the ACLU’s request for an injunction against the continued operation of the program for the duration of the 180-day transitional period (which ends on November 29) from the old program to the quite different collection regime authorized by the USA Freedom Act. As the Second Circuit (in my view, quite correctly) concluded, “Regardless of whether the bulk telephone metadata program was illegal prior to May, as we have held, and whether it would be illegal after November 29, as Congress has now explicitly provided, it is clear that Congress intended to authorize it during the transitionary period.” So far, so good. But remember that the ACLU’s challenge to bulk collection was mounted on both statutory and constitutional grounds, the latter of which the Second Circuit was able to avoid in its earlier ruling because of its conclusion that, prior to the enactment of the USA Freedom Act, bulk collection was unauthorized by Congress. Now that it has held that it is authorized during the transitional period, that therefore tees up, quite unavoidably, whether bulk collection violates the Fourth Amendment. But rather than decide that (momentous) question, the Second Circuit ducked:
  • We agree with the government that we ought not meddle with Congress’s considered decision regarding the transition away from bulk telephone metadata collection, and also find that addressing these issues at this time would not be a prudent use of judicial authority. We need not, and should not, decide such momentous constitutional issues based on a request for such narrow and temporary relief. To do so would take more time than the brief transition period remaining for the telephone metadata program, at which point, any ruling on the constitutionality of the demised program would be fruitless. In other words, because any constitutional violation is short-lived, and because it results from the “considered decision” of Congress, it would be fruitless to actually resolve the constitutionality of bulk collection during the transitional period.
  • Hopefully, it won’t take a lot of convincing for folks to understand just how wrong-headed this is. For starters, if the plaintiffs are correct, they are currently being subjected to unconstitutional government surveillance for which they are entitled to a remedy. The fact that this surveillance has a limited shelf-life (and/or that Congress was complicit in it) doesn’t in any way ameliorate the constitutional violation — which is exactly why the Supreme Court has, for generations, recognized an exception to mootness doctrine for constitutional violations that, owing to their short duration, are “capable of repetition, yet evading review.” Indeed, in this very same opinion, the Second Circuit first held that the ACLU’s challenge isn’t moot, only to then invoke mootness-like principles to justify not resolving the constitutional claim. It can’t be both; either the constitutional challenge is moot, or it isn’t. But more generally, the notion that constitutional adjudication of a claim with a short shelf-life is “fruitless” utterly misses the significance of the establishment of forward-looking judicial precedent, especially in a day and age in which courts are allowed to (and routinely do) avoid resolving the merits of constitutional claims in cases in which the relevant precedent is not “clearly established.” Maybe, if this were the kind of constitutional question that was unlikely to recur, there’d be more to the Second Circuit’s avoidance of the issue in this case. But whether and to what extent the Fourth Amendment applies to information we voluntarily provide to third parties is hardly that kind of question, and the Second Circuit’s unconvincing refusal to answer that question in a context in which it is quite squarely presented is nothing short of feckless.
Paul Merrell

EFF Pries More Information on Zero Days from the Government's Grasp | Electronic Fronti... - 0 views

  • Until just last week, the U.S. government kept up the charade that its use of a stockpile of security vulnerabilities for hacking was a closely held secret. In fact, in response to EFF’s FOIA suit to get access to the official U.S. policy on zero days, the government redacted every single reference to “offensive” use of vulnerabilities. To add insult to injury, the government’s claim was that even admitting to offensive use would cause damage to national security. Now, in the face of EFF’s brief marshaling overwhelming evidence to the contrary, the charade is over. In response to EFF’s motion for summary judgment, the government has disclosed a new version of the Vulnerabilities Equities Process, minus many of the worst redactions. First and foremost, it now admits that the “discovery of vulnerabilities in commercial information technology may present competing ‘equities’ for the [government’s] offensive and defensive mission.” That might seem painfully obvious—a flaw or backdoor in a Juniper router is dangerous for anyone running a network, whether that network is in the U.S. or Iran. But the government’s failure to adequately weigh these “competing equities” was so severe that in 2013 a group of experts appointed by President Obama recommended that the policy favor disclosure “in almost all instances for widely used code.”
  • The newly disclosed version of the Vulnerabilities Equities Process (VEP) also officially confirms what everyone already knew: the use of zero days isn’t confined to the spies. Rather, the policy states that the “law enforcement community may want to use information pertaining to a vulnerability for similar offensive or defensive purposes but for the ultimate end of law enforcement.” Similarly it explains that “counterintelligence equities can be defensive, offensive, and/or law enforcement-related” and may “also have prosecutorial responsibilities.” Given that the government is currently prosecuting users for committing crimes over Tor hidden services, and that it identified these individuals using vulnerabilities it calls a “Network Investigative Technique,” this too doesn’t exactly come as a shocker. Just a few weeks ago, the government swore that even acknowledging the mere fact that it uses vulnerabilities offensively “could be expected to cause serious damage to the national security.” That’s a standard move in FOIA cases involving classified information, even though the government unnecessarily classifies documents at an astounding rate. In this case, the government relented only after nearly a year and a half of litigation by EFF. The government would be well advised to stop relying on such weak secrecy claims—it only risks undermining its own credibility.
  • The new version of the VEP also reveals significantly more information about the general process the government follows when a vulnerability is identified. In a nutshell, an agency that discovers a zero day is responsible for invoking the VEP, which then provides for centralized coordination and weighing of equities among all affected agencies. Along with a declaration from an official at the Office of the Director of National Intelligence, this new information provides more background on the reasons why the government decided to develop an overarching zero day policy in the first place: it “recognized that not all organizations see the entire picture of vulnerabilities, and each organization may have its own equities and concerns regarding the prioritization of patches and fixes, as well as its own distinct mission obligations.” We now know the VEP was finalized in February 2010, but the government apparently failed to implement it in any substantial way, prompting the presidential review group’s recommendation to prioritize disclosure over offensive hacking. We’re glad to have forced a little more transparency on this important issue, but the government is still foolishly holding on to a few last redactions, including refusing to name which agencies participate in the VEP. That’s just not supportable, and we’ll be in court next month to argue that the names of these agencies must be disclosed. 
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
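The flow described above (an application submits an image, and Microsoft’s servers return a per-face emotional profile) can be sketched offline. The response shape and field names below are assumptions modeled on the article’s description, not Microsoft’s published API contract:

```python
import json

# Hypothetical response shaped like the article's description of the emotion
# API: one entry per detected face, with a score for each of the eight
# emotions Microsoft says it can detect.
SAMPLE_RESPONSE = json.dumps([
    {"faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
     "scores": {"anger": 0.01, "contempt": 0.0, "fear": 0.0, "disgust": 0.0,
                "happiness": 0.82, "neutral": 0.15, "sadness": 0.01,
                "surprise": 0.01}},
])

def dominant_emotions(response_body: str) -> list:
    """Return the highest-scoring emotion for each detected face."""
    faces = json.loads(response_body)
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(SAMPLE_RESPONSE))  # ['happiness']
```

A campaign dashboard of the kind the sales representatives describe would simply aggregate these dominant labels over time, per talking point.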
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees against a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
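Re-identifying the same face by serial number, without ever learning a name, can be illustrated with a toy nearest-neighbor scheme over face embeddings. The embeddings, distance threshold, and ID format here are all hypothetical; real systems use learned high-dimensional face templates:

```python
import math

def assign_ids(embeddings, threshold=0.5):
    """Give each face embedding a serial ID: reuse an existing ID when a
    previously seen embedding is within `threshold`, else mint a new one."""
    known = []   # (serial_id, reference_embedding)
    ids = []
    for emb in embeddings:
        for fid, ref in known:
            if math.dist(emb, ref) < threshold:
                ids.append(fid)   # same face seen again, even hours later
                break
        else:
            fid = "f%03x" % len(known)
            known.append((fid, emb))
            ids.append(fid)
    return ids

# Four sightings, two distinct faces (toy 2-D embeddings)
sightings = [(0.10, 0.20), (0.90, 0.80), (0.12, 0.21), (0.88, 0.79)]
ids = assign_ids(sightings)
print(ids)                      # ['f000', 'f001', 'f000', 'f001']
print(len(ids), len(set(ids)))  # 4 2
```

The last line mirrors the exhibit’s display: total faces seen versus unique serial numbers.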
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  • But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
Paul Merrell

The Senate has its own insincere net neutrality bill - 0 views

  • Now that the House of Representatives has floated a superficial net neutrality bill, it's the Senate's turn. Louisiana Senator John Kennedy has introduced a companion version of the Open Internet Preservation Act that effectively replicates the House measure put forward by Tennessee Representative Marsha Blackburn. As before, it supports net neutrality only on a basic level -- and there are provisions that would make it difficult to combat other abuses. The legislation would technically forbid internet providers from blocking and throttling content, but it wouldn't bar paid prioritization. Theoretically, ISPs could create de facto "slow lanes" for competing services by offering mediocre speeds unless they pay for faster connections. The bill would also curb the FCC's ability to deal with other violations, and would prevent states from passing their own net neutrality laws. In short, the bill is much more about limiting regulation than protecting open access and competition. Kennedy's bill isn't expected to go far in the Senate, just as Blackburn's hasn't done much in the House. However, his proposal comes mere days after senators put forward a Congressional Review Act that would undo the FCC's decision to kill net neutrality. Kennedy had claimed he was considering support for the CRA, but his proposal contradicts that -- why push a heavily watered-down bill if you were willing to revert to the stronger legislation? It's not a completely surprising move and is largely symbolic, but it's disappointing for those who hoped there would be truly bipartisan support for a return to net neutrality.
Paul Merrell

Theresa May to create new internet that would be controlled and regulated by government... - 1 views

  • Theresa May is planning to introduce huge regulations on the way the internet works, allowing the government to decide what is said online. Particular focus has been drawn to the end of the manifesto, which makes clear that the Tories want to introduce huge changes to the way the internet works. "Some people say that it is not for government to regulate when it comes to technology and the internet," it states. "We disagree." Senior Tories confirmed to BuzzFeed News that the phrasing indicates that the government intends to introduce huge restrictions on what people can post, share and publish online. The plans will allow Britain to become "the global leader in the regulation of the use of personal data and the internet", the manifesto claims. It comes soon after the Investigatory Powers Act came into law. That legislation allowed the government to force internet companies to keep records on their customers' browsing histories, as well as giving ministers the power to break apps like WhatsApp so that messages can be read. The manifesto makes reference to those increased powers, saying that the government will work even harder to ensure there is no "safe space for terrorists to be able to communicate online". That is apparently a reference in part to its work to encourage technology companies to build backdoors into their encrypted messaging services – which gives the government the ability to read terrorists' messages, but also weakens the security of everyone else's messages, technology companies have warned.
  • The government now appears to be launching a similarly radical change in the way that social networks and internet companies work. While much of the internet is currently controlled by private businesses like Google and Facebook, Theresa May intends to allow government to decide what is and isn't published, the manifesto suggests. The new rules would include laws that make it harder than ever to access pornographic and other websites. The government will be able to place restrictions on seeing adult content and any exceptions would have to be justified to ministers, the manifesto suggests. The manifesto even suggests that the government might stop search engines like Google from directing people to pornographic websites. "We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm," the Conservatives write.
  • The laws would also force technology companies to delete anything that a person posted when they were under 18. But perhaps most unusually they would be forced to help controversial government schemes like its Prevent strategy, by promoting counter-extremist narratives. "In harnessing the digital revolution, we must take steps to protect the vulnerable and give people confidence to use the internet without fear of abuse, criminality or exposure to horrific content", the manifesto claims in a section called 'the safest place to be online'. The plans are in keeping with the Tories' commitment that the online world must be regulated as strongly as the offline one, and that the same rules should apply in both. "Our starting point is that online rules should reflect those that govern our lives offline," the Conservatives' manifesto says, explaining this justification for a new level of regulation. "It should be as unacceptable to bully online as it is in the playground, as difficult to groom a young child on the internet as it is in a community, as hard for children to access violent and degrading pornography online as it is in the high street, and as difficult to commit a crime digitally as it is physically."
  • The manifesto also proposes that internet companies will have to pay a levy, like the one currently paid by gambling firms. Just like with gambling, that money will be used to pay for advertising schemes to tell people about the dangers of the internet, in particular being used to "support awareness and preventative activity to counter internet harms", according to the manifesto. The Conservatives will also seek to regulate the kind of news that is posted online and how companies are paid for it. If elected, Theresa May will "take steps to protect the reliability and objectivity of information that is essential to our democracy" – and crack down on Facebook and Google to ensure that news companies get enough advertising money. If internet companies refuse to comply with the rulings – a suggestion that some have already made about the powers in the Investigatory Powers Act – then there will be a strict and strong set of ways to punish them. "We will introduce a sanctions regime to ensure compliance, giving regulators the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law," the manifesto reads. In laying out its plan for increased regulation, the Tories anticipate and reject potential criticism that such rules could put people at risk.
  • "While we cannot create this framework alone, it is for government, not private companies, to protect the security of people and ensure the fairness of the rules by which people and businesses abide," the document reads. "Nor do we agree that the risks of such an approach outweigh the potential benefits."
Paul Merrell

HART: Homeland Security's Massive New Database Will Include Face Recognition, DNA, and ... - 0 views

  • The U.S. Department of Homeland Security (DHS) is quietly building what will likely become the largest database of biometric and biographic data on citizens and foreigners in the United States. The agency’s new Homeland Advanced Recognition Technology (HART) database will include multiple forms of biometrics—from face recognition to DNA, data from questionable sources, and highly personal data on innocent people. It will be shared with federal agencies outside of DHS as well as state and local law enforcement and foreign governments. And yet, we still know very little about it. The records DHS plans to include in HART will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate. Data like face recognition makes it possible to identify and track people in real time, including at lawful political protests and other gatherings. Other data DHS is planning to collect—including information about people’s “relationship patterns” and from officer “encounters” with the public—can be used to identify political affiliations, religious activities, and familial and friendly relationships. These data points are also frequently colored by conjecture and bias.
  • DHS currently collects a lot of data. Its legacy IDENT fingerprint database contains information on 220 million unique individuals and processes 350,000 fingerprint transactions every day. This is an exponential increase from 20 years ago when IDENT only contained information on 1.8 million people. Between IDENT and other DHS-managed databases, the agency manages over 10 billion biographic records and adds 10-15 million more each week.
  • DHS’s new HART database will allow the agency to vastly expand the types of records it can collect and store. HART will support at least seven types of biometric identifiers, including face and voice data, DNA, scars and tattoos, and a blanket category for “other modalities.” It will also include biographic information, like name, date of birth, physical descriptors, country of origin, and government ID numbers. And it will include data we know to be highly subjective, including information collected from officer “encounters” with the public and information about people’s “relationship patterns.”
  • DHS’s face recognition roll-out is especially concerning. The agency uses mobile biometric devices that can identify faces and capture face data in the field, allowing its ICE (immigration) and CBP (customs) officers to scan everyone with whom they come into contact, whether or not those people are suspected of any criminal activity or an immigration violation. DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the U.S. When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras. Police abuse of facial recognition technology is not a theoretical issue: it’s happening today. Law enforcement has already used face recognition on public streets and at political protests. During the protests surrounding the death of Freddie Gray in 2015, Baltimore Police ran social media photos against a face recognition database to identify protesters and arrest them. Recent Amazon promotional videos encourage police agencies to acquire that company’s face “Rekognition” capabilities and use them with body cameras and smart cameras to track people throughout cities. At least two U.S. cities are already using Rekognition. DHS compounds face recognition’s threat to anonymity and free speech by planning to include “records related to the analysis of relationship patterns among individuals.” We don’t know where DHS or its external partners will be getting these “relationship pattern” records, but they could come from social media profiles and posts, which the government plans to track by collecting social media user names from all foreign travelers entering the country.
Paul Merrell

Banning end-to-end encryption being considered by Trump team- 9to5Mac - 0 views

  • The Trump administration is considering the possibility of banning end-to-end encryption, as used by services like Apple’s Messages and FaceTime, as well as competing platforms like WhatsApp and Signal. The topic was reportedly the main topic of a previously-unreported meeting of a National Security Council meeting on Wednesday … Politico cites three sources for the story. Senior Trump administration officials met on Wednesday to discuss whether to seek legislation prohibiting tech companies from using forms of encryption that law enforcement can’t break — a provocative step that would reopen a long-running feud between federal authorities and Silicon Valley. The encryption challenge, which the government calls “going dark,” was the focus of a National Security Council meeting Wednesday morning that included the No. 2 officials from several key agencies, according to three people familiar with the matter. The meeting reportedly discussed two options. Senior officials debated whether to ask Congress to effectively outlaw end-to-end encryption, which scrambles data so that only its sender and recipient can read it […] “The two paths were to either put out a statement or a general position on encryption, and [say] that they would continue to work on a solution, or to ask Congress for legislation,” said one of the people. No decision was reached given strongly opposing views within the government.
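What makes end-to-end encryption hard for law enforcement to break is that the two endpoints derive a shared key that never crosses the network, so a carrier or wiretap sees only ciphertext. A deliberately toy sketch: a Diffie-Hellman exchange over a Mersenne prime with a hash-based keystream stands in for real primitives such as X25519 and AES-GCM, so this is illustrative only, not secure:

```python
import hashlib
import secrets

P = 2 ** 127 - 1  # toy modulus (a Mersenne prime); real DH uses vetted groups
G = 3

def keystream_xor(key, data):
    """XOR data against a SHA-256 counter keystream (symmetric: encrypts and decrypts)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Each endpoint keeps its private key; only the public values cross the network.
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Both sides compute the same secret; an eavesdropper with only the public
# values cannot (short of solving the discrete log).
alice_key = hashlib.sha256(pow(bob_pub, alice_priv, P).to_bytes(16, "big")).digest()
bob_key = hashlib.sha256(pow(alice_pub, bob_priv, P).to_bytes(16, "big")).digest()

ciphertext = keystream_xor(alice_key, b"meet at noon")
print(keystream_xor(bob_key, ciphertext))  # b'meet at noon'
```

A legislative ban on this pattern would mean requiring a third key, held by the provider or the government, to be mixed into the exchange — precisely the backdoor the companies have resisted.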
Paul Merrell

Trump administration pulls back curtain on secretive cybersecurity process - The Washin... - 0 views

  • The White House on Wednesday made public for the first time the rules by which the government decides to disclose or keep secret software flaws that can be turned into cyberweapons — whether by U.S. agencies hacking for foreign intelligence, money-hungry criminals or foreign spies seeking to penetrate American computers. The move to publish an unclassified charter responds to years of criticism that the process was unnecessarily opaque, fueling suspicion that it cloaked a stockpile of software flaws that the National Security Agency was hoarding to go after foreign targets but that put Americans’ cybersecurity at risk.
  • The rules are part of the “Vulnerabilities Equities Process,” which the Obama administration revamped in 2014 as a multiagency forum to debate whether and when to inform companies such as Microsoft and Juniper that the government has discovered or bought a software flaw that, if weaponized, could affect the security of their product. The Trump administration has mostly not altered the rules under which the government reaches a decision but is disclosing its process. Under the VEP, an “equities review board” of at least a dozen national security and civilian agencies will meet monthly — or more often, if a need arises — to discuss newly discovered vulnerabilities. Besides the NSA, the CIA and the FBI, the list includes the Treasury, Commerce and State departments, and the Office of Management and Budget. The priority is on disclosure, the policy states, to protect core Internet systems, the U.S. economy and critical infrastructure, unless there is “a demonstrable, overriding interest” in using the flaw for intelligence or law enforcement purposes. The government has long said that it discloses the vast majority — more than 90 percent — of the vulnerabilities it discovers or buys in products from defense contractors or other sellers. In recent years, that has amounted to more than 100 a year, according to people familiar with the process. But because the process was classified, the National Security Council, which runs the discussion, was never able to reveal any numbers. Now, said White House cybersecurity coordinator Rob Joyce, the number of flaws disclosed and the number retained will be made public in an annual report. A classified version will be sent to Congress, he said.
Paul Merrell

Judge "Disturbed" To Learn Google Tracks 'Incognito' Users, Demands Answers | ZeroHedge - 1 views

  • A US District Judge in San Jose, California says she was "disturbed" over Google's data collection practices, after learning that the company still collects and uses data from users in its Chrome browser's so-called 'incognito' mode - and has demanded an explanation "about what exactly Google does," according to Bloomberg.
  • In a class-action lawsuit that describes the company's private browsing claims as a "ruse" and "seeks $5,000 in damages for each of the millions of people whose privacy has been compromised since June of 2016," US District Judge Lucy Koh said she finds it "unusual" that the company would make the "extra effort" to gather user data if it doesn't actually use the information for targeted advertising or to build user profiles. Koh has a long history with the Alphabet Inc. subsidiary, having previously forced the Mountain View, California-based company to disclose its scanning of emails for the purposes of targeted advertising and profile building. In this case, Google is accused of relying on pieces of its code within websites that use its analytics and advertising services to scrape users’ supposedly private browsing history and send copies of it to Google’s servers. Google makes it seem like private browsing mode gives users more control of their data, Amanda Bonn, a lawyer representing users, told Koh. In reality, “Google is saying there’s basically very little you can do to prevent us from collecting your data, and that’s what you should assume we’re doing,” Bonn said. Andrew Schapiro, a lawyer for Google, argued the company’s privacy policy “expressly discloses” its practices. “The data collection at issue is disclosed,” he said. Another lawyer for Google, Stephen Broome, said website owners who contract with the company to use its analytics or other services are well aware of the data collection described in the suit. -Bloomberg
  • Koh isn't buying it, arguing that the company is effectively tricking users into believing that their information is not being transmitted to the company. "I want a declaration from Google on what information they’re collecting on users to the court’s website, and what that’s used for," Koh demanded. The case is Brown v. Google, 20-cv-03664, U.S. District Court, Northern District of California (San Jose), via Bloomberg.
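The technical claim at the heart of the suit, that analytics code embedded in third-party sites reports each visit to the collector's servers regardless of the browser's private mode, can be sketched roughly as follows. The endpoint and parameter names here are illustrative assumptions, not Google's actual API:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical beacon of the kind an embedded analytics script would send.
visited_page = "https://example.com/private-article"
beacon_url = "https://analytics.example/collect?" + urlencode(
    {"dl": visited_page, "cid": "anon-123"}  # page URL plus a client identifier
)

# Incognito mode deletes *local* traces (history, cookies) when the window
# closes; it does not stop this request from leaving the browser, so the
# collecting server still learns which page was visited.
query = parse_qs(urlsplit(beacon_url).query)
assert query["dl"] == [visited_page]
```

The design point the plaintiffs press on is that the data leaves the device at visit time, so nothing the browser later deletes locally affects what the server already received.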
Paul Merrell

Google Engineer Leaks Nearly 1,000 Pages of Internal Documents, Alleging Bias, Censorship - 0 views

  • A former Google engineer has released nearly 1,000 pages of documents that he says prove that the company, at least in some of its products, secretly boosts or demotes content based on what it deems to be true or false, while publicly claiming to be a neutral platform. The software engineer, Zach Vorhies, first provided the documents to Project Veritas, a right-leaning investigative journalism nonprofit, as well as the Justice Department’s antitrust division, which has been investigating Google for potentially anti-competitive behavior.
  • When he returned to work, however, Google sent him a letter demanding, among other things, that he turn over his employee badge and work laptop, which he did, and “cease and desist” from disclosing “any non-public Google files.” Afraid for his safety, he posted on Twitter that if anything happened to him, all the documents he took would be released to the public. Google then did a “wellness check” on him, he said. The San Francisco police received a call that Vorhies may be mentally ill. A group of officers waited for him outside his house and put him in handcuffs. “This is a large way in which they intimidate their employees that go rogue on the company,” he said. Vorhies then decided that it would be safer for him to go public.
  • One of the goals of the effort was a “clean & regularly sanitized news corpus,” it reads.
  • Robert Epstein, a psychologist who has spent years researching Google’s influence on its users, has published research showing that just by deciding the sequence of top search results, the company can sway undecided voters. Epstein determined that this shifted 2.6 million votes in the 2016 presidential election to Trump’s opponent, former Secretary of State Hillary Clinton. He warned that in 2020, if companies such as Google and Facebook all support the same candidate, they will be able to shift 15 million votes — well beyond the margin most presidents have won by.
Paul Merrell

WhatsApp sues Israel's NSO for allegedly helping spies hack phones around the world - R... - 0 views

  • WhatsApp sued Israeli surveillance firm NSO Group on Tuesday, accusing it of helping government spies break into the phones of roughly 1,400 users across four continents in a hacking spree whose targets included diplomats, political dissidents, journalists and senior government officials.
  • In a lawsuit filed in federal court in San Francisco, messaging service WhatsApp, which is owned by Facebook Inc (FB.O), accused NSO of facilitating government hacking sprees in 20 countries. Mexico, the United Arab Emirates and Bahrain were the only countries identified. WhatsApp said in a statement that 100 civil society members had been targeted, and called it “an unmistakable pattern of abuse.” NSO denied the allegations.
  • Citizen Lab, a cybersecurity research laboratory based at the University of Toronto that worked with WhatsApp to investigate the phone hacking, told Reuters that the targets included well-known television personalities, prominent women who had been subjected to online hate campaigns and people who had faced “assassination attempts and threats of violence.”
  • NSO came under particularly harsh scrutiny over the allegation that its spyware played a role in the death of Washington Post journalist Jamal Khashoggi, who was murdered at the Saudi Consulate in Istanbul a little over a year ago. Khashoggi’s friend Omar Abdulaziz is one of seven activists and journalists who have taken the spyware firm to court in Israel and Cyprus over allegations that their phones were compromised using NSO technology. Amnesty has also filed a lawsuit, demanding that the Israeli Ministry of Defense revoke NSO’s export license to “stop it profiting from state-sponsored repression.”
Paul Merrell

States to launch antitrust investigation into big tech companies, reports say | TechCrunch - 2 views

  • State attorneys general in more than a dozen states are preparing to begin an antitrust investigation of the tech giants, The Wall Street Journal and The New York Times reported Monday, putting the spotlight on an industry that is already facing federal scrutiny. The bipartisan group of attorneys general from as many as 20 states is expected to formally launch a probe as soon as next month to assess whether tech companies are using their dominant market position to hurt competition, the WSJ reported. If true, the move follows the Department of Justice, which last month announced its own antitrust review of how online platforms scaled to their gigantic sizes and whether they are using their power to curb competition and stifle innovation. Earlier this year, the Federal Trade Commission formed a task force to monitor competition among tech platforms.