
Future of the Web / Group items tagged responsive


Paul Merrell

Victory for Users: Librarian of Congress Renews and Expands Protections for Fair Uses |... - 0 views

  • The new rules for exemptions to copyright's DRM-circumvention laws were issued today, and the Librarian of Congress has granted much of what EFF asked for over the course of months of extensive briefs and hearings. The exemptions we requested—ripping DVDs and Blu-rays for making fair use remixes and analysis; preserving video games and running multiplayer servers after publishers have abandoned them; jailbreaking cell phones, tablets, and other portable computing devices to run third-party software; and security research and modification and repairs on cars—have each been accepted, subject to some important caveats.
  • The exemptions are needed thanks to a fundamentally flawed law that forbids users from breaking DRM, even if the purpose is a clearly lawful fair use. As software has become ubiquitous, so has DRM.  Users often have to circumvent that DRM to make full use of their devices, from DVDs to games to smartphones and cars. The law allows users to request exemptions for such lawful uses—but it doesn’t make it easy. Exemptions are granted through an elaborate rulemaking process that takes place every three years and places a heavy burden on EFF and the many other requesters who take part. Every exemption must be argued anew, even if it was previously granted, and even if there is no opposition. The exemptions that emerge are limited in scope. What is worse, they only apply to end users—the people who are actually doing the ripping, tinkering, jailbreaking, or research—and not to the people who make the tools that facilitate those lawful activities. The section of the law that creates these restrictions—the Digital Millennium Copyright Act's Section 1201—is fundamentally flawed, has resulted in myriad unintended consequences, and is long past due for reform or removal altogether from the statute books. Still, as long as its rulemaking process exists, we're pleased to have secured the following exemptions.
  • The new rules are long and complicated, and we'll be posting more details about each as we get a chance to analyze them. In the meantime, we hope each of these exemptions enables more exciting fair uses that educate, entertain, improve the underlying technology, and keep us safer. A better long-term solution, though, is to eliminate the need for this onerous rulemaking process. We encourage lawmakers to support efforts like the Unlocking Technology Act, which would limit the scope of Section 1201 to copyright infringements—not fair uses. And as the White House looks for the next Librarian of Congress, who is ultimately responsible for issuing the exemptions, we hope to get a candidate who acts—as a librarian should—in the interest of the public's access to information.
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interfere with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third-party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their owners’ lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study in which it planted 50 “lost” phones in public to see what the people who picked them up would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted to contact the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay guarantees a window of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the threat of such a breach, whether from a hacker or a nation-state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementing and maintaining such a complex system could be high.
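The single-point-of-failure argument above can be made concrete with a toy model. This is illustrative only, not real cryptography: it stands in a hash-based XOR keystream for a proper cipher, and the device names and data are invented. The point it demonstrates is structural: even with a separate access key per phone, one breach of the central escrow database defeats every device at once.

```python
# Toy model of a per-device key-escrow scheme: every phone has its own key,
# but all keys live in one central database.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the key (stand-in for a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream encryption is its own inverse

# Each device gets a unique key, escrowed in a central database.
escrow_db = {device: secrets.token_bytes(32) for device in ("phone-a", "phone-b")}
stolen_phone_data = encrypt(escrow_db["phone-b"], b"contacts, photos, passwords")

# An attacker who breaches the escrow database defeats every device at once:
leaked_db = dict(escrow_db)  # one breach leaks all keys
assert decrypt(leaked_db["phone-b"], stolen_phone_data) == b"contacts, photos, passwords"
```

Per-device keys only narrow the blast radius of a leaked *single* key; the database holding all of them remains exactly the high-value target the article describes.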
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported in his report on the topic the idea that court-ordered decryption, provided certain criteria are met, does not violate human rights. However, in the context of Vance’s proposal, this seems to conflate court-ordered decryption with government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.

Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope

  • In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately identifiable the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Gonzalo San Gil, PhD.

Is there an excess of uniformity in today’s major record industry? | Ind... - 0 views

  •  
    "We have never released more records in all of Warner's history; I have never released more records than now. There is huge production, and with enormous diversity. What is true is that never before in history have I seen such absolute uniformity in the world's sales charts."
Paul Merrell

Microsoft Helping to Store Police Video From Taser Body Cameras | nsnbc international - 0 views

  • Microsoft has joined forces with Taser to combine the Azure cloud platform with law enforcement management tools.
  • Taser’s Axon body camera data management software on Evidence.com will run on Azure and Windows 10 devices to integrate evidence collection, analysis, and archival features as set forth by the Federal Bureau of Investigation Criminal Justice Information Services (CJIS) Security Policy. As per the partnership, Taser will utilize Azure’s machine learning and computing technologies to store police data on Microsoft’s government cloud. In addition, Taser’s redaction capabilities will be improved, which will assist police departments that are subject to bulk data requests. Currently, Taser operates on Amazon Web Services; however, this deal may entice police departments to upgrade their technology, which in turn would drive up sales of Windows 10. This partnership comes after Taser won a lucrative deal with the Los Angeles Police Department (LAPD) last year, an order of 7,000 units that included 800 Axon body cameras for officers, in response to the recent deaths of several African Americans at the hands of police.
  • In order to ensure Taser maintains a monopoly on police body cameras, the corporation acquired contracts with police departments all across the nation for the purchase of body cameras through dubious ties to certain chiefs of police. The corporation announced in 2014 that “orders for body cameras [has] soared to $24.6 million from October to December,” which represents a five-fold increase over 2013. Currently, Taser is in 13 cities, with negotiations for new contracts being discussed in 28 more. Taser, according to records and interviews, allegedly has “financial ties to police chiefs whose departments have bought the recording devices.” In fact, Taser has been shown to provide airfare and luxury hotels for chiefs of police traveling for speaking engagements in Australia and the United Arab Emirates (UAE), and has hired them as consultants, among other perks and deals. Since 2013, Taser has signed “consulting agreements with two such chiefs’ weeks after they retired” and is allegedly “in talks with a third who also backed the purchase of its products.”
Paul Merrell

China Just Launched the Most Frightening Game Ever - and Soon It Will Be Mandatory - 0 views

  • As if further proof were needed that Orwell’s dystopia is now upon us, China has now gamified obedience to the State. Though that is every bit as creepily terrifying as it sounds, citizens may still choose whether or not they wish to opt in — that is, until the program becomes compulsory in 2020. “Going under the innocuous name of ‘Sesame Credit,’ China has created a score for how good a citizen you are,” explains Extra Credits’ video about the program. “The owners of China’s largest social networks have partnered with the government to create something akin to the U.S. credit score — but, instead of measuring how regularly you pay your bills, it measures how obediently you follow the party line.”
  • In the works for years, China’s ‘social credit system’ aims to create a docile, compliant citizenry who are fiscally and morally responsible by employing a game-like format to create self-imposed, group social control. In other words, China gamified peer pressure to control its citizenry; and, though the scheme hasn’t been fully implemented yet, it’s already working — insidiously well.
  • The system is run by two companies, Alibaba and Tencent, which run all the social networks in China and therefore have access to a vast amount of data about people’s social ties and activities and what they say. In addition to measuring your ability to pay, as in the United States, the scores serve as a measure of political compliance. Among the things that will hurt a citizen’s score are posting political opinions without prior permission, or posting information that the regime does not like, such as about the Tiananmen Square massacre that the government carried out to hold on to power, or the Shanghai stock market collapse. It will hurt your score not only if you do these things, but if any of your friends do them. And, in what appears to be the goal of the entire program, the video adds: “Imagine the social pressure against disobedience or dissent that this will create.”
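The friend-penalty mechanic described above can be sketched in a few lines. This is a hypothetical toy model for illustration, not the actual Sesame Credit algorithm: the names, starting score, and the 50 percent friend penalty are all invented to show how a single demerit propagates through a social graph and creates the peer pressure the program relies on.

```python
# Toy model: a demerit lowers not only the offender's score but their friends' too.
friends = {
    "ana": {"bo", "chen"},
    "bo": {"ana"},
    "chen": {"ana"},
    "dee": set(),  # no ties to ana, so dee is unaffected below
}
scores = {name: 700 for name in friends}  # everyone starts at an arbitrary baseline

def apply_demerit(user: str, penalty: int, friend_factor: float = 0.5) -> None:
    # The offender pays the full penalty; each friend pays a fraction of it.
    scores[user] -= penalty
    for friend in friends[user]:
        scores[friend] -= int(penalty * friend_factor)

apply_demerit("ana", 100)
print(scores)  # ana loses 100; bo and chen each lose 50; dee is untouched
```

The incentive structure falls straight out of the model: once your score depends on your friends' behavior, the cheapest way to protect it is to pressure, or drop, the dissenters you know.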
  • As Creemers described to the Dutch news outlet de Volkskrant, “With the help of the latest internet technologies, the government wants to exercise individual surveillance. The Chinese aim […] is clearly an attempt to create a new citizen.”
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
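A client of such an API would receive, for each detected face, something like the per-face record the exhibit displayed: an identifier, demographic estimates, and a score for each emotion category. The JSON shape below is an assumption modeled on the categories and fields the article lists, not Microsoft's documented schema; the `sample_response`, `faceAttributes`, and `scores` names are invented for illustration.

```python
# Hypothetical sketch of consuming an emotional-analysis API response.
import json

# Assumed response shape: one record per detected face, with scores for the
# eight emotion categories named in the article.
sample_response = json.loads("""
[
  {"faceId": "b2ff",
   "faceAttributes": {"gender": "male", "age": 23},
   "scores": {"anger": 0.01, "contempt": 0.02, "fear": 0.0, "disgust": 0.01,
              "happiness": 0.62, "neutral": 0.3, "sadness": 0.02, "surprise": 0.02}}
]
""")

def dominant_emotion(face: dict) -> str:
    # Pick the emotion category with the highest score for one detected face.
    scores = face["scores"]
    return max(scores, key=scores.get)

for face in sample_response:
    print(face["faceId"], dominant_emotion(face))  # → b2ff happiness
```

A campaign dashboard of the kind the sales representatives described would simply aggregate `dominant_emotion` over every face in every frame, talking point by talking point.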
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees against a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen and the number of unique serial numbers.
Photo: Alex Emmons
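The total-versus-unique distinction the display demonstrated reduces to a simple counting exercise once each face carries a persistent serial number. The serial values below are invented; the point is that the system needs no names to tell repeat visitors from new ones.

```python
# Minimal sketch of the counting behavior the display demonstrated: each
# detection event carries the face's persistent serial number.
seen_events = ["b2ff", "07c1", "b2ff", "3a9d", "b2ff"]  # hypothetical serials

total_faces = len(seen_events)        # every detection counts
unique_faces = len(set(seen_events))  # repeat serials collapse to one person

print(total_faces, unique_faces)  # → 5 3
```

That a serial can recur "even hours later" is exactly what implies the face templates are being stored, which is the retention question Bedoya raises below.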
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
Paul Merrell

Challenge to data transfer tool used by Facebook will go to Europe's top court | TechCr... - 1 views

  • The five-week court hearing in what is a complex case delving into detail on US surveillance operations took place in February. The court issued its ruling today. The 153-page ruling starts by noting “this is an unusual case”, before going into a detailed discussion of the arguments and concluding that the DPC’s concerns about the validity of SCCs should be referred to the European Court of Justice for a preliminary ruling. Schrems is also the man responsible for bringing, in 2013, a legal challenge that ultimately struck down Safe Harbor — the legal mechanism that had oiled the pipe for EU-US personal data flows for fifteen years before the ECJ ruled it to be invalid in October 2015. Schrems’ argument had centered on U.S. government mass surveillance programs, as disclosed via the Snowden leaks, being incompatible with fundamental European privacy rights. After the ECJ struck down Safe Harbor he then sought to apply the same arguments against Facebook’s use of SCCs — returning to Ireland to make the complaint as that’s where the company has its European HQ. It’s worth noting that the European Commission has since replaced Safe Harbor with a new (and it claims more robust) data transfer mechanism, called the EU-US Privacy Shield — which is now, as Safe Harbor was, used by thousands of businesses. Although that too is facing legal challenges as critics continue to argue there is a core problem of incompatibility between two distinct legal regimes where EU privacy rights collide with US mass surveillance.
  • Schrems’ Safe Harbor challenge also started in the Irish Court before being ultimately referred to the ECJ. So there’s more than a little legal deja vu here, especially given the latest development in the case. In its ruling on the SCC issue, the Irish Court noted that a US ombudsperson position created under Privacy Shield to handle EU citizens complaints about companies’ handling of their data is not enough to overcome what it described as “well founded concerns” raised by the DPC regarding the adequacy of the protections for EU citizens data.
  • Making a video statement outside court in Dublin today, Schrems said the Irish court had dismissed Facebook’s argument that the US government does not undertake any surveillance.
  • In a written statement on the ruling Schrems added: “I welcome the judgement by the Irish High Court. It is important that a neutral Court outside of the US has summarized the facts on US surveillance in a judgement, after diving through more than 45,000 pages of documents in a five week hearing.
  • On Facebook, he also said: “In simple terms, US law requires Facebook to help the NSA with mass surveillance and EU law prohibits just that. As Facebook is subject to both jurisdictions, they got themselves in a legal dilemma that they cannot possibly solve in the long run.”
  • While Schrems’ original complaint pertained to Facebook, the Irish DPC’s position means many more companies that use the mechanism could face disruption if SCCs are ultimately invalidated as a result of the legal challenge to their validity.
Paul Merrell

Facebook blasted by US and UK lawmakers - nsnbc international | nsnbc international - 0 views

  • Lawmakers in the United States and the United Kingdom are calling on Facebook chief executive Mark Zuckerberg to explain how the names, preferences and other information from tens of millions of users ended up in the hands of the Cambridge Analytica data analysis firm.
  • After Facebook cited data privacy policy violations and announced that it was suspending the Cambridge Analytica data analytics firm, also tied to the Trump campaign, new revelations have emerged. On Saturday, reports revealed that Cambridge Analytica used a feature once available to Facebook app developers to collect information on some 270,000 people. In the process, the company, which was at the time handling U.S. President Donald Trump’s presidential campaign, gained access to data on tens of millions of their Facebook “friends,” and it was not at all clear whether any of these people had given explicit permission for this kind of sharing. Facebook’s Deputy General Counsel Paul Grewal said in a statement, “We will take legal action if necessary to hold them responsible and accountable for any unlawful behavior.”
  • The social media giant also added that it was continuing to investigate the claims. According to reports, Cambridge Analytica worked for the failed presidential campaign of U.S. Senator Ted Cruz and then for the presidential campaign of Donald Trump. Federal Election Commission records reportedly show that Trump’s campaign hired Cambridge Analytica in June 2016 and paid it more than $6.2 million. On its website, the company says that it “provided the Donald J. Trump for President campaign with the expertise and insights that helped win the White House.” Cambridge Analytica also mentions that it uses “behavioral microtargeting,” or combining analysis of people’s personalities with demographics, to predict and influence mass behavior.  According to the company, it has data on 220 million Americans, two thirds of the U.S. population. Cambridge Analytica says it has worked on other campaigns in the United States and other countries, and it is funded by Robert Mercer, a prominent supporter of politically conservative groups.
  • Facebook stated that it suspended Cambridge Analytica and its parent group Strategic Communication Laboratories (SCL) after receiving reports that they did not delete information about Facebook users that had been inappropriately shared. For months now, both the companies have been embroiled in investigations in Washington and London but the recent demands made by lawmakers focused explicitly on Zuckerberg, who has not testified publicly on these matters in either nation.
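The mechanism described above — a feature that let an app with roughly 270,000 consenting users reach data on tens of millions of people — follows directly from expanding each consenting user's friend list. A minimal sketch of that expansion (all names and figures below are invented for illustration, not Facebook's actual data model):

```python
# Sketch of friend-list expansion: data from a small set of consenting app
# users implicates a far larger set of profiles. All figures are illustrative.

def reachable_profiles(consenting_users, friends_of):
    """Profiles exposed when an app may read each consenting user's friend list."""
    reached = set(consenting_users)               # the users who installed the app
    for user in consenting_users:
        reached.update(friends_of.get(user, ()))  # plus every friend, consent or not
    return reached

if __name__ == "__main__":
    friends = {"alice": {"bob", "carol"}, "dave": {"carol", "erin", "frank"}}
    print(len(reachable_profiles({"alice", "dave"}, friends)))  # 6 profiles from 2 installs
```

The asymmetry is the point: consent is collected from the installers, but the reached set grows with the size of their social graphs.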
Paul Merrell

New Documents Reveal FBI's "Cozy" Relationship with Geek Squad - 1 views

  • Throughout the past ten years, the FBI has at varying points in time maintained a particularly close relationship with Best Buy officials and used the company’s Geek Squad employees as informants. But the FBI refuses to confirm or deny key information about how the agency may potentially circumvent computer owners’ Fourth Amendment rights. The Electronic Frontier Foundation (EFF) obtained a handful of documents in response to a Freedom of Information Act (FOIA) lawsuit filed in February of last year. EFF says they show the relationship between the FBI and Geek Squad employees is much “cozier” than they thought.
  • In court filings, the defense mentioned there were “eight FBI informants at Geek Squad City” from 2007 to 2012. Multiple employees received payments ranging from $500 to $1,000 for their work as informants.
  • There is no evidence that the FBI obtained warrants before the Geek Squad informants searched computers they were repairing. Geek Squad employees are believed to routinely search unallocated space for any illegal content that may be on a device and then alert the FBI after conducting these “fishing expeditions” for criminal activity; this, EFF argues, is what the FBI trains them to do. EFF sought “records about the extent to which [the FBI] directs and trains Best Buy employees to conduct warrantless searches of people’s devices.” The government stonewalled EFF, releasing only documents that had already been referred to by news media. The FBI neither confirmed nor denied whether the agency has “similar relationships with other computer repair facilities or businesses,” and it would not produce any documents detailing procedures or “training materials” for cultivating informants at computer repair facilities.
Gonzalo San Gil, PhD.

UK "Porn Filter" Triggers Widespread Internet Censorship | TorrentFreak - 2 views

  •  
    " Ernesto on July 2, 2014 C: 38 Breaking A new tool released by the Open Rights Group today reveals that 20% of the 100,000 most-visited websites on the Internet are blocked by the parental filters of UK ISPs. With the newly launched website the group makes it easier to expose false positives and show that the blocking efforts ban many legitimate sites, TorrentFreak included. "
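The Blocked tool described above works by probing each of the 100,000 sites through every participating ISP's parental filter and aggregating the results. A toy sketch of that aggregation step, assuming a site counts as blocked if at least one ISP's filter refuses it (ISP names and probe results are invented):

```python
# Toy aggregation in the spirit of the Open Rights Group's Blocked tool:
# a site counts as blocked if at least one ISP's filter refuses it.
# The probe results below are invented for illustration.

def blocked_share(probe_results):
    """probe_results: {site: {isp: bool}} -> fraction of sites blocked by >= 1 ISP."""
    blocked = [site for site, isps in probe_results.items() if any(isps.values())]
    return len(blocked) / len(probe_results)

probes = {
    "torrentfreak.com": {"isp_a": True,  "isp_b": False},
    "example.org":      {"isp_a": False, "isp_b": False},
    "blog.example":     {"isp_a": False, "isp_b": True},
    "news.example":     {"isp_a": False, "isp_b": False},
    "shop.example":     {"isp_a": False, "isp_b": False},
}
print(blocked_share(probes))  # 0.4
```

The "blocked by any ISP" criterion is deliberately broad; it is what surfaces the false positives the group set out to expose.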
Paul Merrell

Google Concealed Data Breach Over Fear Of Repercussions; Shuts Down Google+ Service | Z... - 0 views

  • Google opted in the spring not to disclose that the data of hundreds of thousands of Google+ users had been exposed, because the company says it found no evidence of misuse, reports the Wall Street Journal. The Silicon Valley giant feared both regulatory scrutiny and reputational damage, according to documents reviewed by the Journal and people briefed on the incident. In response to being busted, Google parent Alphabet is set to announce broad privacy measures, which include permanently shutting down all consumer functionality of Google+, a move which "effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook, and is widely seen as one of Google's biggest failures." Shares in Alphabet fell as much as 2.1% following the Journal's report.
  • The software glitch gave outside developers access to private Google+ profile data between 2015 and March 2018, after Google internal investigators found the problem and fixed it. According to a memo prepared by Google's legal and policy staff and reviewed by the Journal, senior executives worried that disclosing the incident would probably trigger "immediate regulatory interest," while inviting comparisons to Facebook's massive data harvesting scandal. 
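The bug class at issue — an API returning profile fields the owner never agreed to share — comes down to a missing visibility check before data leaves the server. A hedged sketch of the intended behavior (the field names and structure here are hypothetical, not the actual Google+ People API schema):

```python
# Illustrative consent filter: a profile API should return only the fields
# the owner marked shareable. The Google+ glitch reportedly let third-party
# developers read private fields; names here are hypothetical.

def visible_profile(profile, shared_fields):
    """Return only the fields the profile owner agreed to expose to apps."""
    return {k: v for k, v in profile.items() if k in shared_fields}

profile = {"name": "Ada", "email": "ada@example.com", "occupation": "engineer"}
print(visible_profile(profile, shared_fields={"name"}))  # {'name': 'Ada'}
```

When a filter like this is skipped (or applied to the wrong field set), every caller of the endpoint silently over-reads, which is why the exposure window ran from 2015 until the internal audit caught it.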
Paul Merrell

Theresa May to create new internet that would be controlled and regulated by government... - 1 views

  • Theresa May is planning to introduce huge regulations on the way the internet works, allowing the government to decide what is said online. Particular focus has been drawn to the end of the manifesto, which makes clear that the Tories want to introduce huge changes to the way the internet works. "Some people say that it is not for government to regulate when it comes to technology and the internet," it states. "We disagree." Senior Tories confirmed to BuzzFeed News that the phrasing indicates that the government intends to introduce huge restrictions on what people can post, share and publish online. The plans will allow Britain to become "the global leader in the regulation of the use of personal data and the internet", the manifesto claims. It comes soon after the Investigatory Powers Act came into law. That legislation allowed the government to force internet companies to keep records on their customers' browsing histories, as well as giving ministers the power to break apps like WhatsApp so that messages can be read. The manifesto makes reference to those increased powers, saying that the government will work even harder to ensure there is no "safe space for terrorists to be able to communicate online". That is apparently a reference in part to its work to encourage technology companies to build backdoors into their encrypted messaging services – which gives the government the ability to read terrorists' messages, but also weakens the security of everyone else's messages, technology companies have warned.
  • The government now appears to be launching a similarly radical change in the way that social networks and internet companies work. While much of the internet is currently controlled by private businesses like Google and Facebook, Theresa May intends to allow government to decide what is and isn't published, the manifesto suggests. The new rules would include laws that make it harder than ever to access pornographic and other websites. The government will be able to place restrictions on seeing adult content and any exceptions would have to be justified to ministers, the manifesto suggests. The manifesto even suggests that the government might stop search engines like Google from directing people to pornographic websites. "We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm," the Conservatives write.
  • The laws would also force technology companies to delete anything that a person posted when they were under 18. But perhaps most unusually they would be forced to help controversial government schemes like its Prevent strategy, by promoting counter-extremist narratives. "In harnessing the digital revolution, we must take steps to protect the vulnerable and give people confidence to use the internet without fear of abuse, criminality or exposure to horrific content", the manifesto claims in a section called 'the safest place to be online'. The plans are in keeping with the Tories' commitment that the online world must be regulated as strongly as the offline one, and that the same rules should apply in both. "Our starting point is that online rules should reflect those that govern our lives offline," the Conservatives' manifesto says, explaining this justification for a new level of regulation. "It should be as unacceptable to bully online as it is in the playground, as difficult to groom a young child on the internet as it is in a community, as hard for children to access violent and degrading pornography online as it is in the high street, and as difficult to commit a crime digitally as it is physically."
  • The manifesto also proposes that internet companies will have to pay a levy, like the one currently paid by gambling firms. Just like with gambling, that money will be used to pay for advertising schemes to tell people about the dangers of the internet, in particular being used to "support awareness and preventative activity to counter internet harms", according to the manifesto. The Conservatives will also seek to regulate the kind of news that is posted online and how companies are paid for it. If elected, Theresa May will "take steps to protect the reliability and objectivity of information that is essential to our democracy" – and crack down on Facebook and Google to ensure that news companies get enough advertising money. If internet companies refuse to comply with the rulings – a suggestion that some have already made about the powers in the Investigatory Powers Act – then there will be a strict and strong set of ways to punish them. "We will introduce a sanctions regime to ensure compliance, giving regulators the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law," the manifesto reads. In laying out its plan for increased regulation, the Tories anticipate and reject potential criticism that such rules could put people at risk.
  • "While we cannot create this framework alone, it is for government, not private companies, to protect the security of people and ensure the fairness of the rules by which people and businesses abide," the document reads. "Nor do we agree that the risks of such an approach outweigh the potential benefits."
Paul Merrell

The New Snowden? NSA Contractor Arrested Over Alleged Theft Of Classified Data - 0 views

  • A contractor working for the National Security Agency (NSA) was arrested by the FBI following his alleged theft of “state secrets.” More specifically, the contractor, Harold Thomas Martin, is charged with stealing highly classified source codes developed to covertly hack the networks of foreign governments, according to several senior law enforcement and intelligence officials. The Justice Department has said that these stolen materials were “critical to national security.” Martin was employed by Booz Allen Hamilton, the company responsible for most of the NSA’s most sensitive cyber-operations. Edward Snowden, the most well-known NSA whistleblower, also worked for Booz Allen Hamilton until he fled to Hong Kong in 2013 where he revealed a trove of documents exposing the massive scope of the NSA dragnet surveillance. That surveillance system was shown to have targeted untold numbers of innocent Americans. According to the New York Times, the theft “raises the embarrassing prospect” that an NSA insider managed to steal highly damaging secret information from the NSA for the second time in three years, not to mention the “Shadow Broker” hack this past August, which made classified NSA hacking tools available to the public.
  • Snowden himself took to Twitter to comment on the arrest. In a tweet, he said the news of Martin’s arrest “is huge” and asked, “Did the FBI secretly arrest the person behind the reports [that the] NSA sat on huge flaws in US products?” It is currently unknown if Martin was connected to those reports as well.
  • It also remains to be seen what Martin’s motivations were in removing classified data from the NSA. Though many suspect that he planned to follow in Snowden’s footsteps, the government will more likely argue that he had planned to commit espionage by selling state secrets to “adversaries.” According to the New York Times article on the arrest, Russia, China, Iran, and North Korea are named as examples of the “adversaries” who would have been targeted by the NSA codes that Martin is accused of stealing. However, Snowden revealed widespread US spying on foreign governments including several US allies such as France and Germany. This suggests that the stolen “source codes” were likely utilized on a much broader scale.
Paul Merrell

The Million Dollar Dissident: NSO Group's iPhone Zero-Days used against a UAE Human Rig... - 0 views

  • 1. Executive Summary Ahmed Mansoor is an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”).  On August 10 and 11, 2016, Mansoor received SMS text messages on his iPhone promising “new secrets” about detainees tortured in UAE jails if he clicked on an included link. Instead of clicking, Mansoor sent the messages to Citizen Lab researchers.  We recognized the links as belonging to an exploit infrastructure connected to NSO Group, an Israel-based “cyber war” company that sells Pegasus, a government-exclusive “lawful intercept” spyware product.  NSO Group is reportedly owned by an American venture capital firm, Francisco Partners Management. The ensuing investigation, a collaboration between researchers from Citizen Lab and from Lookout Security, determined that the links led to a chain of zero-day exploits (“zero-days”) that would have remotely jailbroken Mansoor’s stock iPhone 6 and installed sophisticated spyware.  We are calling this exploit chain Trident.  Once infected, Mansoor’s phone would have become a digital spy in his pocket, capable of employing his iPhone’s camera and microphone to snoop on activity in the vicinity of the device, recording his WhatsApp and Viber calls, logging messages sent in mobile chat apps, and tracking his movements.   We are not aware of any previous instance of an iPhone remote jailbreak used in the wild as part of a targeted attack campaign, making this a rare find.
  • The Trident Exploit Chain: CVE-2016-4657: Visiting a maliciously crafted website may lead to arbitrary code execution CVE-2016-4655: An application may be able to disclose kernel memory CVE-2016-4656: An application may be able to execute arbitrary code with kernel privileges Once we confirmed the presence of what appeared to be iOS zero-days, Citizen Lab and Lookout quickly initiated a responsible disclosure process by notifying Apple and sharing our findings. Apple responded promptly, and notified us that they would be addressing the vulnerabilities. We are releasing this report to coincide with the availability of the iOS 9.3.5 patch, which blocks the Trident exploit chain by closing the vulnerabilities that NSO Group appears to have exploited and sold to remotely compromise iPhones. Recent Citizen Lab research has shown that many state-sponsored spyware campaigns against civil society groups and human rights defenders use “just enough” technical sophistication, coupled with carefully planned deception. This case demonstrates that not all threats follow this pattern.  The iPhone has a well-deserved reputation for security.  As the iPhone platform is tightly controlled by Apple, technically sophisticated exploits are often required to enable the remote installation and operation of iPhone monitoring tools. These exploits are rare and expensive. Firms that specialize in acquiring zero-days often pay handsomely for iPhone exploits.  One such firm, Zerodium, acquired an exploit chain similar to the Trident for one million dollars in November 2015. The high cost of iPhone zero-days, the apparent use of NSO Group’s government-exclusive Pegasus product, and prior known targeting of Mansoor by the UAE government provide indicators that point to the UAE government as the likely operator behind the targeting. Remarkably, this case marks the third commercial “lawful intercept” spyware suite employed in attempts to compromise Mansoor.  
In 2011, he was targeted with FinFisher’s FinSpy spyware, and in 2012 he was targeted with Hacking Team’s Remote Control System.  Both Hacking Team and FinFisher have been the object of several years of revelations highlighting the misuse of spyware to compromise civil society groups, journalists, and human rights workers.
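Since Apple shipped the fix in iOS 9.3.5, the practical first question for any device is simply whether its version is at or above that release. A minimal sketch of that version gate (this is only the comparison logic, not how Citizen Lab or Lookout actually detect Pegasus infections):

```python
# The iOS release that closed the three Trident vulnerabilities
# (CVE-2016-4655, CVE-2016-4656, CVE-2016-4657).
TRIDENT_PATCH = (9, 3, 5)

def patched_against_trident(ios_version: str) -> bool:
    """True if the reported iOS version is at or above the 9.3.5 fix."""
    return tuple(int(part) for part in ios_version.split(".")) >= TRIDENT_PATCH

print(patched_against_trident("9.3.4"))  # False: still exposed to the exploit chain
print(patched_against_trident("9.3.5"))  # True
print(patched_against_trident("10.0"))   # True
```

Tuple comparison handles versions of different lengths correctly here: `(9, 3)` sorts below `(9, 3, 5)`, and `(10, 0)` above it.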
Paul Merrell

South Korea considers shutting down domestic cryptocurrency exchanges - 0 views

  • The request is one of the first responses from the U.S. Congress to the disclosure earlier this month by security researchers of the two major flaws, which may allow hackers to steal passwords or encryption keys on most types of computers, phones and cloud-based servers. McNerney, a member of the House Energy and Commerce Committee, asked the companies to explain the scope of Spectre and Meltdown, their timeframe for understanding the vulnerabilities, how consumers are affected and whether the flaws have been exploited, among other questions.
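On Linux, one concrete way a user can answer "how am I affected?" is the kernel's own reporting under `/sys/devices/system/cpu/vulnerabilities/`, added alongside the Meltdown/Spectre mitigations. A sketch that parses that style of output into a dict (the directory is real on kernels with the reporting patches, but the sample text below is illustrative, not captured from a specific machine):

```python
# Parse the kind of status lines exposed under
# /sys/devices/system/cpu/vulnerabilities/ on patched Linux kernels,
# e.g. as printed by `grep . /sys/devices/system/cpu/vulnerabilities/*`.
# The sample text is illustrative output, not captured data.

def parse_vuln_status(text):
    """Map vulnerability name -> kernel-reported status string."""
    status = {}
    for line in text.strip().splitlines():
        name, _, state = line.partition(":")  # split on the first colon only
        status[name.strip()] = state.strip()
    return status

sample = """\
meltdown: Mitigation: PTI
spectre_v1: Mitigation: usercopy/swapgs barriers
spectre_v2: Vulnerable
"""
print(parse_vuln_status(sample)["spectre_v2"])  # Vulnerable
```

Using `partition` rather than `split` matters because the status itself can contain colons ("Mitigation: PTI").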
Paul Merrell

Net neutrality comment fraud will be investigated by government | Ars Technica - 0 views

  • The US Government Accountability Office (GAO) will investigate the use of impersonation in public comments on the Federal Communications Commission's net neutrality repeal. Congressional Democrats requested the investigation last month, and the GAO has granted the request. While the investigation request was spurred by widespread fraud in the FCC's net neutrality repeal docket, Democrats asked the GAO to also "examine whether this shady practice extends to other agency rulemaking processes." The GAO will do just that, having told Democrats in a letter that it will "review the extent and pervasiveness of fraud and the misuse of American identities during federal rulemaking processes."
  • The GAO provides independent, nonpartisan audits and investigations for Congress. The GAO previously agreed to investigate DDoS attacks that allegedly targeted the FCC comment system, also in response to a request by Democratic lawmakers. The Democrats charged that Chairman Ajit Pai's FCC did not provide enough evidence that the attacks actually happened, and they asked the GAO to find out what evidence the FCC used to make its determination. Democrats also asked the GAO to examine whether the FCC is prepared to prevent future attacks. The DDoS investigation should happen sooner than the new one on comment fraud because the GAO accepted that request in October.
  • The FCC's net neutrality repeal received more than 22 million comments, but millions were apparently submitted by bots and falsely attributed to real Americans (including some dead ones) who didn't actually submit comments. Various analyses confirmed the widespread spam and fraud; one analysis found that 98.5 percent of unique comments opposed the repeal plan.
  • The FCC's comment system makes no attempt to verify submitters' identities, and allows bulk uploads so that groups collecting signatures for letters and petitions can get them on the docket easily. It was like that even before Pai took over as chair, but the fraud became far more pervasive in the proceeding that led to the repeal of net neutrality rules. Pai's FCC did not remove any fraudulent comments from the record. Democratic FCC Commissioner Jessica Rosenworcel called for a delay in the net neutrality repeal vote because of the fraud, but the Republican majority pushed the vote through as scheduled last month. New York Attorney General Eric Schneiderman has been investigating the comment fraud and says the FCC has stonewalled the investigation by refusing to provide evidence. Schneiderman is also leading a lawsuit to reverse the FCC's net neutrality repeal, and the comment fraud could play a role in the case. "We understand that the FCC's rulemaking process requires it to address all comments it receives, regardless of who submits them," Congressional Democrats said in their letter requesting a GAO investigation. "However, we do not believe any outside parties should be permitted to generate any comments to any federal governmental entity using information it knows to be false, such as the identities of those submitting the comments."
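The analyses cited above all hinge on a first step of separating unique comment texts from bulk-duplicated ones. A toy sketch of that step (the docket entries are invented for illustration; real analyses also cluster near-duplicates, which this does not attempt):

```python
from collections import Counter

def comment_stats(comments):
    """Split a comment docket into unique texts and bulk-duplicated ones."""
    counts = Counter(text.strip().lower() for text in comments)  # normalize lightly
    unique = [t for t, n in counts.items() if n == 1]
    duplicated = {t: n for t, n in counts.items() if n > 1}
    return unique, duplicated

docket = [
    "I support net neutrality.",
    "Repeal Title II now.",
    "Repeal Title II now.",
    "Repeal Title II now.",
    "Please keep the 2015 rules.",
]
unique, duplicated = comment_stats(docket)
print(len(unique), duplicated)  # 2 {'repeal title ii now.': 3}
```

Only the unique set is meaningful for gauging sentiment, which is why the 98.5 percent figure was computed over unique comments rather than the raw 22 million.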
Paul Merrell

"In 10 Years, the Surveillance Business Model Will Have Been Made Illegal" - - 1 views

  • The opening panel of the Stigler Center’s annual antitrust conference discussed the source of digital platforms’ power and what, if anything, can be done to address the numerous challenges their ability to shape opinions and outcomes present. 
  • Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.   While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.   The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”
  • Google’s fellow FAANGs also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.   But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power makes it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.   Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.
Paul Merrell

"Alarming": Facebook Teams Up With Think-Tank Funded by Saudi Arabia and Military Contr... - 0 views

  • In a new project Facebook insists is a completely objective and nonpartisan effort to root out what it deems "disinformation," the social media giant announced on Thursday that it is partnering with the Atlantic Council—a prominent Washington-based think-tank funded by Saudi Arabia, major oil companies, defense contractors, and Charles Koch—to prevent its platform from "being abused during elections." "This is alarming," independent journalist Rania Khalek concluded in a tweet on Thursday. "The Atlantic Council—which is funded by gulf monarchies, western governments, NATO, oil and weapons companies, etc.—will now assist Facebook in suppressing what they decide is disinformation." According to its statement announcing the initiative, Facebook will "use the Atlantic Council's Digital Research Unit Monitoring Missions during elections and other highly sensitive moments."
  • While Facebook's statement fawned over the Atlantic Council's "stellar reputation," critics argued that the organization's reliance on donations from foreign oil monarchies and American plutocrats puts the lie to the project's stated mission of shielding the democratic process from manipulation and abuse. "Monopoly social media corporations teaming up with [the] pro-U.S. NatSec blob to determine truth was always the logical end of 'fake news' panic," Adam Johnson, a contributor at Fairness and Accuracy in Reporting (FAIR), argued on Twitter in response to Facebook's announcement.
  • According to a New York Times report from 2014, the Atlantic Council has received donations from at least 25 foreign nations since 2008, including the United Kingdom, Qatar, the United Arab Emirates, and Saudi Arabia.
Paul Merrell

Superiority in Cyberspace Will Remain Elusive - Federation Of American Scientists - 0 views

  • Military planners should not anticipate that the United States will ever dominate cyberspace, the Joint Chiefs of Staff said in a new doctrinal publication. The kind of supremacy that might be achievable in other domains is not a realistic option in cyber operations. “Permanent global cyberspace superiority is not possible due to the complexity of cyberspace,” the DoD publication said. In fact, “Even local superiority may be impractical due to the way IT [information technology] is implemented; the fact US and other national governments do not directly control large, privately owned portions of cyberspace; the broad array of state and non-state actors; the low cost of entry; and the rapid and unpredictable proliferation of technology.” Nevertheless, the military has to make do under all circumstances. “Commanders should be prepared to conduct operations under degraded conditions in cyberspace.” This sober assessment appeared in a new edition of Joint Publication 3-12, Cyberspace Operations, dated June 8, 2018. (The 100-page document updates and replaces a 70-page version from 2013.) The updated DoD doctrine presents a cyber concept of operations, describes the organization of cyber forces, outlines areas of responsibility, and defines limits on military action in cyberspace, including legal limits.
  • The new cyber doctrine reiterates the importance and the difficulty of properly attributing cyber attacks against the US to their source. “The ability to hide the sponsor and/or the threat behind a particular malicious effect in cyberspace makes it difficult to determine how, when, and where to respond,” the document said. “The design of the Internet lends itself to anonymity and, combined with applications intended to hide the identity of users, attribution will continue to be a challenge for the foreseeable future.”
Paul Merrell

India begins to embrace digital privacy. - 0 views

  • India is the world’s largest democracy and is home to 13.5 percent of the world’s internet users. So the Indian Supreme Court’s August ruling that privacy is a fundamental, constitutional right for all of the country’s 1.32 billion citizens was momentous. But now, close to three months later, it’s still unclear exactly how the decision will be implemented. Will it change everything for internet users? Or will the status quo remain? The most immediate consequence of the ruling is that tech companies such as Facebook, Twitter, Google, and Alibaba will be required to rein in their collection, utilization, and sharing of Indian user data. But the changes could go well beyond technology. If implemented properly, the decision could affect national politics, business, free speech, and society. It could encourage the country to continue to make large strides toward increased corporate and governmental transparency, stronger consumer confidence, and the establishment and growth of the Indian “individual” as opposed to the Indian collective identity. But that’s a pretty big if. The privacy debate in India was in many ways sparked by a controversy that has shaken up the landscape of national politics for several months. It began in 2016 as a debate around a social security program that requires participating citizens to obtain biometric, or Aadhaar, cards. Each card has a unique 12-digit number and records an individual’s fingerprints and irises in order to confirm his or her identity. The program was devised to increase the ease with which citizens could receive social benefits and avoid instances of fraud. Over time, Aadhaar cards have become mandatory for integral tasks such as opening bank accounts, buying and selling property, and filing tax returns, much to the chagrin of citizens who are uncomfortable about handing over their personal data. 
Before the ruling, India had weak privacy protections in place, enabling unchecked data collection on citizens by private companies and the government. Over the past year, a number of large-scale data leaks and breaches that have impacted major Indian corporations, as well as the Aadhaar program itself, have prompted users to start asking questions about the security and uses of their personal data.
  • In order to bolster the ruling, the government will also introduce a set of data protection laws, to be developed by a committee led by retired Supreme Court judge B.N. Srikrishna. The committee will study the data protection landscape, develop a draft Data Protection Bill, and identify how, and whether, the Aadhaar Act should be amended in light of the privacy ruling.
  • Should the data protection laws be implemented in an enforceable manner, the ruling will significantly impact the business landscape in India. Since the election of Prime Minister Narendra Modi in May 2014, the government has made fostering and expanding the technology and startup sector a top priority. The startup scene has grown, giving rise to several promising e-commerce companies, but in 2014, only 12 percent of India’s internet users were online consumers. If the new data protection laws are truly impactful, companies will have to accept responsibility for collecting, utilizing, and protecting user data safely and fairly. Users would also have a stronger form of redress when their newly recognized rights are violated, which could transform how they engage with technology. This has the potential to not only increase consumer confidence but revitalize the Indian business sector, as it makes it more amenable and friendly to outside investors, users, and collaborators.
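One technical detail of the 12-digit Aadhaar number mentioned above: per UIDAI's published format, the final digit is a Verhoeff checksum over the first 11, so software can catch mistyped numbers offline without contacting any server. A sketch of that check (the Verhoeff tables are the standard ones; the sample number below is synthetic, not a real Aadhaar ID):

```python
# Verhoeff checksum, the scheme Aadhaar uses for its 12th digit.
# d is the dihedral-group D5 multiplication table; p holds powers of the
# base permutation; inv maps each group element to its inverse.
d = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    [1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
    [2, 3, 4, 0, 1, 7, 8, 9, 5, 6],
    [3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
    [4, 0, 1, 2, 3, 9, 5, 6, 7, 8],
    [5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
    [6, 5, 9, 8, 7, 1, 0, 4, 3, 2],
    [7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
    [8, 7, 6, 5, 9, 3, 2, 1, 0, 4],
    [9, 8, 7, 6, 5, 4, 3, 2, 1, 0],
]
p = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 5, 7, 6, 2, 8, 3, 0, 9, 4]]
while len(p) < 8:                        # p[i] = p[1] composed with p[i-1]
    p.append([p[1][x] for x in p[-1]])
inv = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]

def verhoeff_check_digit(base: str) -> str:
    """Check digit for an 11-digit stem (positions shifted by one)."""
    c = 0
    for i, digit in enumerate(reversed(base)):
        c = d[c][p[(i + 1) % 8][int(digit)]]
    return str(inv[c])

def verhoeff_valid(number: str) -> bool:
    """True if the full number (stem + check digit) passes the Verhoeff check."""
    c = 0
    for i, digit in enumerate(reversed(number)):
        c = d[c][p[i % 8][int(digit)]]
    return c == 0

base = "12345678901"                      # synthetic 11-digit stem
full = base + verhoeff_check_digit(base)  # append the 12th (check) digit
print(verhoeff_valid(full))               # True
print(verhoeff_valid(full[:-1] + str((int(full[-1]) + 1) % 10)))  # False
```

Unlike the more common Luhn scheme, Verhoeff catches all single-digit errors and all adjacent transpositions, which is presumably why it was chosen for an identifier people transcribe by hand.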