Home / Future of the Web / Group items tagged: Code

Gonzalo San Gil, PhD.

Midori in Launchpad - 0 views

  •  
    [ # Join #midori on irc.freenode.net for discussions about bugs and development. Project statistics: https://www.ohloh.net/p/midori # Midori is a fast and lightweight web browser with HTML5 support that uses the WebKit rendering engine and the GTK+ interface. It can manage many open tabs and windows. The URL bar completes history, bookmarks, search engines and open tabs out of the box. Web developers can use the powerful web inspector that is part of WebKit. Individual pages can easily be turned into web apps, and new profiles can be created on demand. A number of extensions are included by default:
    * Adblock, with support for ABP filter lists and custom rules, is built-in.
    * You can download files with Aria2 or SteadyFlow.
    * User scripts and styles are supported, a la Greasemonkey.
    * Cookies and scripts can be managed via NoJS and Cookie Security Manager.
    * Open tabs can be switched in a vertical panel or a popup window.]
Gonzalo San Gil, PhD.

How 2 Legal Cases May Decide the Future of Open Source Software | Network World [# ! Pe... - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! This is: the 'Problem' is not 'Open Source' but # ! 'Those' who make bad use of it...
    • Gonzalo San Gil, PhD.
       
      # ! The Attacks on Open Source continue... # ! wonder why... and take part for The Freedom.
  •  
    [ The open source universe may soon be less collaborative and more litigious. Two cases now in the courts could open the legal floodgates. By Paul Rubens Follow CIO | Mar 6, 2015 6:00 AM PT ...]
Paul Merrell

What the Hack! 56 Suspected Hackers arrested in the UK | nsnbc international - 0 views

  • The UK National Crime Agency arrested 56 suspected hackers, including one 23-year-old male who allegedly attempted to hack his way into the U.S. Department of Defense in 2014. Not attempting to minimize the potential risks of hacking, but how much does cyber-crime actually cost, what are the risks, and what about those who hack the data of billions of internet users per day to, allegedly, “keep all of us safe?”
  • Besides the 23-year-old who allegedly attempted to hack his way into a U.S. Department of Defense site, the other detainees allegedly were members of the hacking collectives Lizard Squad and D33DS, which stand accused of fraud, money laundering, and Denial of Service and Distributed Denial of Service (DoS & DDoS) attacks. D33DS allegedly stole the data of some 450,000 Yahoo users. The arrests followed the recent announcement about the so-called FREAK security vulnerability that was leaving thousands of SSL sites unprotected. The arrest of the 56 hackers in the UK was reported as the National Crime Agency’s way of “sending a clear message” to the hacker community.
  • The U.S. DoD’s cyber-security functioned, obviously. A recent article by Benjamin Dean entitled “Hard Evidence: How much is cybercrime really costing us” suggests that the money spent on cyber-security per year is disproportionate to the harm that is being caused by cyber-crime. Dean, who is a Fellow for Internet Governance and Cyber-security at the School of International and Public Affairs at Columbia University, concluded that: There are numerous competing budgetary priorities at any one time and limited funds to spend on meeting all these needs. How much money does it make sense to invest in bolstering cybersecurity, relative to the losses? …In the hysteria created in the wake of the hacks of 2014, we risk making the wrong choice simply because we don’t know what the current sums of money are being spent on.
  • Meanwhile, NSA whistleblower Edward Snowden (think about him what you want) revealed that the NSA and the GCHQ hacked themselves into the possession of the encryption codes of the world’s largest SIM card manufacturer, Gemalto. Snowden’s revelations about the NSA’s PRISM surveillance program wouldn’t come as a surprise to those who have known about the United States’ and allies’ mutual spying network Echelon for decades.
Gonzalo San Gil, PhD.

Hollywood vs. Silicon Valley: Call in the Celebrities! Or Not. | Re/code - 0 views

  •  
    "By now, it has become an old meme: Hollywood executives and Silicon Valley entrepreneurs have always struggled a little to get along"
Paul Merrell

The Digital Hunt for Duqu, a Dangerous and Cunning U.S.-Israeli Spy Virus - The Intercept - 1 views

  • “Is this related to what we talked about before?” Bencsáth said, referring to a previous discussion they’d had about testing new services the company planned to offer customers. “No, something else,” Bartos said. “Can you come now? It’s important. But don’t tell anyone where you’re going.” Bencsáth wolfed down the rest of his lunch and told his colleagues in the lab that he had a “red alert” and had to go. “Don’t ask,” he said as he ran out the door. A while later, he was at Bartos’ office, where a triage team had been assembled to address the problem they wanted to discuss. “We think we’ve been hacked,” Bartos said.
  • They found a suspicious file on a developer’s machine that had been created late at night when no one was working. The file was encrypted and compressed so they had no idea what was inside, but they suspected it was data the attackers had copied from the machine and planned to retrieve later. A search of the company’s network found a few more machines that had been infected as well. The triage team felt confident they had contained the attack but wanted Bencsáth’s help determining how the intruders had broken in and what they were after. The company had all the right protections in place—firewalls, antivirus, intrusion-detection and -prevention systems—and still the attackers got in.
  • Bencsáth was a teacher, not a malware hunter, and had never done such forensic work before. At the CrySyS Lab, where he was one of four advisers working with a handful of grad students, he did academic research for the European Union and occasional hands-on consulting work for other clients, but the latter was mostly run-of-the-mill cleanup work—mopping up and restoring systems after random virus infections. He’d never investigated a targeted hack before, let alone one that was still live, and was thrilled to have the chance. The only catch was, he couldn’t tell anyone what he was doing. Bartos’ company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients. The triage team had taken mirror images of the infected hard drives, so they and Bencsáth spent the rest of the afternoon poring over the copies in search of anything suspicious. By the end of the day, they’d found what they were looking for—an “infostealer” string of code that was designed to record passwords and other keystrokes on infected machines, as well as steal documents and take screenshots. It also catalogued any devices or systems that were connected to the machines so the attackers could build a blueprint of the company’s network architecture. The malware didn’t immediately siphon the stolen data from infected machines but instead stored it in a temporary file, like the one the triage team had found. The file grew fatter each time the infostealer sucked up data, until at some point the attackers would reach out to the machine to retrieve it from a server in India that served as a command-and-control node for the malware.
  • Bencsáth took the mirror images and the company’s system logs with him, after they had been scrubbed of any sensitive customer data, and over the next few days scoured them for more malicious files, all the while being coy to his colleagues back at the lab about what he was doing. The triage team worked in parallel, and after several more days they had uncovered three additional suspicious files. When Bencsáth examined one of them—a kernel-mode driver, a program that helps the computer communicate with devices such as printers—his heart quickened. It was signed with a valid digital certificate from a company in Taiwan (digital certificates are documents ensuring that a piece of software is legitimate). Wait a minute, he thought. Stuxnet—the cyberweapon that was unleashed on Iran’s uranium-enrichment program—also used a driver that was signed with a certificate from a company in Taiwan. That one came from RealTek Semiconductor, but this certificate belonged to a different company, C-Media Electronics. The driver had been signed with the certificate in August 2009, around the same time Stuxnet had been unleashed on machines in Iran.
Paul Merrell

U.S. Embedded Spyware Overseas, Report Claims - NYTimes.com - 0 views

  • The United States has found a way to permanently embed surveillance and sabotage tools in computers and networks it has targeted in Iran, Russia, Pakistan, China, Afghanistan and other countries closely watched by American intelligence agencies, according to a Russian cybersecurity firm. In a presentation of its findings at a conference in Mexico on Monday, Kaspersky Lab, the Russian firm, said that the implants had been placed by what it called the “Equation Group,” which appears to be a veiled reference to the National Security Agency and its military counterpart, United States Cyber Command.
  • It linked the techniques to those used in Stuxnet, the computer worm that disabled about 1,000 centrifuges in Iran’s nuclear enrichment program. It was later revealed that Stuxnet was part of a program code-named Olympic Games and run jointly by Israel and the United States. Kaspersky’s report said that Olympic Games had similarities to a much broader effort to infect computers well beyond those in Iran. It detected particularly high infection rates in computers in Iran, Pakistan and Russia, three countries whose nuclear programs the United States routinely monitors.
  • Some of the implants burrow so deep into the computer systems, Kaspersky said, that they infect the “firmware,” the embedded software that preps the computer’s hardware before the operating system starts. It is beyond the reach of existing antivirus products and most security controls, Kaspersky reported, making it virtually impossible to wipe out.
  • In many cases, it also allows the American intelligence agencies to grab the encryption keys off a machine, unnoticed, and unlock scrambled contents. Moreover, many of the tools are designed to run on computers that are disconnected from the Internet, which was the case in the computers controlling Iran’s nuclear enrichment plants.
Paul Merrell

Free At Last: New DMCA Rules Might Make the Web a Better Place | nsnbc international - 0 views

  • David Mao, the Librarian of Congress, has issued new rules pertaining to exemptions to the Digital Millennium Copyright Act (DMCA) after a 3-year battle that was expedited in the wake of the Volkswagen scandal.
  • Opposition to this new decision is coming from the Environmental Protection Agency (EPA) and the auto industry because the DMCA prohibits “circumventing encryption or access controls to copy or modify copyrighted works.” For example, GM “claimed the exemption ‘could introduce safety and security issues as well as facilitate violation of various laws designed specifically to regulate the modern car, including emissions, fuel economy, and vehicle safety regulations’.” The exemption in question is in Section 1201, which forbids the unlocking of software access controls and has given the auto industry the unique ability to “threaten legal action against anyone who needs to get around those restrictions, no matter how legitimate the reason.” Journalist Nick Statt points out that this provision “made it illegal in the past to unlock your smartphone from its carrier or even to share your HBO Go password with a friend. It’s designed to let corporations protect copyrighted material, but it allows them to crack down on circumventions even when they’re not infringing on those copyrights or trying to access or steal proprietary information.”
  • Kit Walsh, staff attorney for the Electronic Frontier Foundation (EFF), explained that the “‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal—where VW was caught manipulating smog tests—it can be used instead to hide wrongdoing hidden in computer code.” Walsh continued: “We are pleased that analysts will now be able to examine the software in the cars we drive without facing legal threats from car manufacturers, and that the Librarian has acted to promote competition in the vehicle aftermarket and protect the long tradition of vehicle owners tinkering with their cars and tractors. The year-long delay in implementing the exemptions, though, is disappointing and unjustified. The VW smog tests and a long run of security vulnerabilities have shown researchers and drivers need the exemptions now.” As part of the new changes, gamers can “modify an old video game so it doesn’t perform a check with an authentication server that has since been shut down” once the publisher has cut off support for the game.
  • Another positive from the change is that smartphone users will be able to jailbreak their phones and finally enjoy running operating systems and applications from any source, not just those approved by the manufacturer. And finally, those who remix excerpts from DVDs, Blu-ray discs or downloading services will be allowed to mix that material into their own work without violating the DMCA.
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.
  • Unaddressed concerns: preventing encrypted devices from entering the US, and the slippery slope. In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country: an encrypted phone is not immediately identifiable as contraband. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like merely science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Paul Merrell

How to Protect Yourself from NSA Attacks on 1024-bit DH | Electronic Frontier Foundation - 0 views

  • In a post on Wednesday, researchers Alex Halderman and Nadia Heninger presented compelling research suggesting that the NSA has developed the capability to decrypt a large number of HTTPS, SSH, and VPN connections using an attack on common implementations of the Diffie-Hellman key exchange algorithm with 1024-bit primes. Earlier in the year, they were part of a research group that published a study of the Logjam attack, which leveraged overlooked and outdated code to enforce "export-grade" (downgraded, 512-bit) parameters for Diffie-Hellman. By performing a cost analysis of the algorithm with stronger 1024-bit parameters and comparing that with what we know of the NSA "black budget" (and reading between the lines of several leaked documents about NSA interception capabilities) they concluded that it's likely NSA has been breaking 1024-bit Diffie-Hellman for some time now. The good news is, in the time since this research was originally published, the major browser vendors (IE, Chrome, and Firefox) have removed support for 512-bit Diffie-Hellman, addressing the biggest vulnerability. However, 1024-bit Diffie-Hellman remains supported for the foreseeable future despite its vulnerability to NSA surveillance. In this post, we present some practical tips to protect yourself from the surveillance machine, whether you're using a web browser, an SSH client, or VPN software. Disclaimer: This is not a complete guide, and not all software is covered.
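To make the attack surface concrete, here is a toy finite-field Diffie-Hellman exchange in Python. The parameters are tiny and purely illustrative (real deployments need primes of 2048 bits or more); the point is that everything except the two secret exponents travels in the clear, so an attacker who invests in a precomputation against one widely shared prime p can attack every connection negotiated over it:

```python
import secrets

# Toy parameters -- illustrative only, NOT secure. The Logjam research showed
# that most servers reuse a handful of standard primes, which is why a single
# large precomputation against one shared p pays off across many connections.
p = 18446744073709551557  # largest prime below 2**64
g = 2

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)  # sent in the clear: Alice -> Bob
B = pow(g, b, p)  # sent in the clear: Bob -> Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both sides derive the same secret
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving a discrete logarithm, which is exactly the computation the researchers estimate the NSA can afford to precompute for the most common 1024-bit primes.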
Paul Merrell

EFF Pries More Information on Zero Days from the Government's Grasp | Electronic Fronti... - 0 views

  • Until just last week, the U.S. government kept up the charade that its use of a stockpile of security vulnerabilities for hacking was a closely held secret. In fact, in response to EFF’s FOIA suit to get access to the official U.S. policy on zero days, the government redacted every single reference to “offensive” use of vulnerabilities. To add insult to injury, the government’s claim was that even admitting to offensive use would cause damage to national security. Now, in the face of EFF’s brief marshaling overwhelming evidence to the contrary, the charade is over. In response to EFF’s motion for summary judgment, the government has disclosed a new version of the Vulnerabilities Equities Process, minus many of the worst redactions. First and foremost, it now admits that the “discovery of vulnerabilities in commercial information technology may present competing ‘equities’ for the [government’s] offensive and defensive mission.” That might seem painfully obvious—a flaw or backdoor in a Juniper router is dangerous for anyone running a network, whether that network is in the U.S. or Iran. But the government’s failure to adequately weigh these “competing equities” was so severe that in 2013 a group of experts appointed by President Obama recommended that the policy favor disclosure “in almost all instances for widely used code.” [.pdf].
  • The newly disclosed version of the Vulnerabilities Equities Process (VEP) also officially confirms what everyone already knew: the use of zero days isn’t confined to the spies. Rather, the policy states that the “law enforcement community may want to use information pertaining to a vulnerability for similar offensive or defensive purposes but for the ultimate end of law enforcement.” Similarly it explains that “counterintelligence equities can be defensive, offensive, and/or law enforcement-related” and may “also have prosecutorial responsibilities.” Given that the government is currently prosecuting users for committing crimes over Tor hidden services, and that it identified these individuals using vulnerabilities called a “Network Investigative Technique”, this too doesn’t exactly come as a shocker. Just a few weeks ago, the government swore that even acknowledging the mere fact that it uses vulnerabilities offensively “could be expected to cause serious damage to the national security.” That’s a standard move in FOIA cases involving classified information, even though the government unnecessarily classifies documents at an astounding rate. In this case, the government relented only after nearly a year and a half of litigation by EFF. The government would be well advised to stop relying on such weak secrecy claims—it only risks undermining its own credibility.
  • The new version of the VEP also reveals significantly more information about the general process the government follows when a vulnerability is identified. In a nutshell, an agency that discovers a zero day is responsible for invoking the VEP, which then provides for centralized coordination and weighing of equities among all affected agencies. Along with a declaration from an official at the Office of the Director of National Intelligence, this new information provides more background on the reasons why the government decided to develop an overarching zero day policy in the first place: it “recognized that not all organizations see the entire picture of vulnerabilities, and each organization may have its own equities and concerns regarding the prioritization of patches and fixes, as well as its own distinct mission obligations.” We now know the VEP was finalized in February 2010, but the government apparently failed to implement it in any substantial way, prompting the presidential review group’s recommendation to prioritize disclosure over offensive hacking. We’re glad to have forced a little more transparency on this important issue, but the government is still foolishly holding on to a few last redactions, including refusing to name which agencies participate in the VEP. That’s just not supportable, and we’ll be in court next month to argue that the names of these agencies must be disclosed. 
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology, letting observers monitor you individually and track your emotions at the same time. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
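As a rough illustration of the kind of per-face data the exhibit displayed (a serial number, gender, estimated age, and scores for the eight emotions named in Microsoft's blog post), here is a minimal Python sketch that parses such a payload and picks each face's dominant emotion. The JSON field names are assumptions for illustration, not Microsoft's actual response format.

```python
import json

# Example payload shaped like the per-face data the exhibit displayed.
# Field names ("faceId", "scores", etc.) are assumptions, not the real API schema.
sample_response = json.dumps([
    {
        "faceId": "b2ff",
        "gender": "male",
        "age": 23,
        "scores": {
            "anger": 0.01, "contempt": 0.02, "fear": 0.01,
            "disgust": 0.01, "happiness": 0.62, "neutral": 0.30,
            "sadness": 0.02, "surprise": 0.01,
        },
    }
])

def dominant_emotions(payload: str) -> dict:
    """Map each face's serial number to its highest-scoring emotion."""
    return {
        face["faceId"]: max(face["scores"], key=face["scores"].get)
        for face in json.loads(payload)
    }

print(dominant_emotions(sample_response))  # {'b2ff': 'happiness'}
```

A campaign analytics tool would simply aggregate these per-face labels over time to chart crowd response, which is exactly the use case the sales representatives described.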
Paul Merrell

Forget About Siri and Alexa - When It Comes to Voice Identification, the "NSA Reigns Su... - 0 views

  • These and other classified documents provided by former NSA contractor Edward Snowden reveal that the NSA has developed technology not just to record and transcribe private conversations but to automatically identify the speakers. Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees. The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.” It’s not difficult to see why. By intercepting and recording millions of overseas telephone conversations, video teleconferences, and internet calls — in addition to capturing, with or without warrants, the domestic conversations of Americans — the NSA has built an unrivaled collection of distinct voices. Documents from the Snowden archive reveal that analysts fed some of these recordings to speaker recognition algorithms that could connect individuals to their past utterances, even when they had used unknown phone numbers, secret code words, or multiple languages.
  • The classified documents, dating from 2004 to 2012, show the NSA refining increasingly sophisticated iterations of its speaker recognition technology. They confirm the uses of speaker recognition in counterterrorism operations and overseas drug busts. And they suggest that the agency planned to deploy the technology not just to retroactively identify spies like Pelton but to prevent whistleblowers like Snowden.
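The matching step described above, comparing a freshly computed voiceprint against a database of stored voiceprints, can be sketched as a nearest-match search over feature vectors. Real speaker-recognition systems compare learned acoustic representations (for example, statistical models or neural embeddings of pitch and vocal-tract characteristics); the short toy vectors below are stand-ins for illustration only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_speaker(probe, enrolled, threshold=0.95):
    """Return the enrolled speaker whose stored voiceprint is most
    similar to the probe vector, or None if no similarity clears the
    threshold. The vectors here are toy stand-ins for real voiceprints."""
    best_id, best_score = None, threshold
    for speaker_id, voiceprint in enrolled.items():
        score = cosine_similarity(probe, voiceprint)
        if score > best_score:
            best_id, best_score = speaker_id, score
    return best_id

enrolled = {
    "speaker_A": [0.9, 0.1, 0.3],
    "speaker_B": [0.2, 0.8, 0.5],
}
print(match_speaker([0.88, 0.12, 0.31], enrolled))  # speaker_A
```

The near-instantaneous lookup the documents describe follows the same shape: compute a voiceprint from a few spoken words, then score it against every stored print and keep the best match above a confidence threshold.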
Paul Merrell

Judge "Disturbed" To Learn Google Tracks 'Incognito' Users, Demands Answers | ZeroHedge - 1 views

  • A US District Judge in San Jose, California, says she was "disturbed" by Google's data collection practices, after learning that the company still collects and uses data from users in its Chrome browser's so-called 'incognito' mode - and has demanded an explanation "about what exactly Google does," according to Bloomberg.
  • In a class-action lawsuit that describes the company's private browsing claims as a "ruse" - and "seeks $5,000 in damages for each of the millions of people whose privacy has been compromised since June of 2016," US District Judge Lucy Koh said she finds it "unusual" that the company would make the "extra effort" to gather user data if it doesn't actually use the information for targeted advertising or to build user profiles. Koh has a long history with the Alphabet Inc. subsidiary, previously forcing the Mountain View, California-based company to disclose its scanning of emails for the purposes of targeted advertising and profile building. In this case, Google is accused of relying on pieces of its code within websites that use its analytics and advertising services to scrape users' supposedly private browsing history and send copies of it to Google's servers. Google makes it seem like private browsing mode gives users more control of their data, Amanda Bonn, a lawyer representing users, told Koh. In reality, "Google is saying there's basically very little you can do to prevent us from collecting your data, and that's what you should assume we're doing," Bonn said. Andrew Schapiro, a lawyer for Google, argued the company's privacy policy "expressly discloses" its practices. "The data collection at issue is disclosed," he said. Another lawyer for Google, Stephen Broome, said website owners who contract with the company to use its analytics or other services are well aware of the data collection described in the suit. -Bloomberg
  • Koh isn't buying it - arguing that the company is effectively tricking users into believing that their information is not being transmitted to the company. "I want a declaration from Google on what information they're collecting on users to the court's website, and what that's used for," Koh demanded. The case is Brown v. Google, 20-cv-03664, U.S. District Court, Northern District of California (San Jose), via Bloomberg.
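The mechanism at issue, embedded analytics code reporting each visited page back to a collector regardless of browser mode, can be sketched as the construction of a beacon request. The collector host, endpoint path, and parameter names below are hypothetical, loosely modeled on common analytics beacons, and are not Google's actual protocol.

```python
from urllib.parse import urlencode

def build_beacon_url(collector, page_url, referrer, client_id):
    """Construct the kind of tracking-beacon request an embedded
    analytics snippet fires on every page view. The collector host and
    parameter names here are hypothetical illustrations."""
    params = urlencode({"dl": page_url, "dr": referrer, "cid": client_id})
    return f"{collector}/collect?{params}"

print(build_beacon_url("https://analytics.example.com",
                       "https://shop.example.org/cart",
                       "https://shop.example.org/",
                       "555.123"))
```

Because the request is fired by code embedded in the first-party page, it carries the full visited URL to the third-party collector whether or not the browser is in a private mode, which is the crux of the plaintiffs' complaint.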
Paul Merrell

Asia Times | Say hello to the Russia-China operating system | Article - 0 views

  • Google has cut Huawei off from Android, so Huawei may migrate to Aurora. Call it mobile Eurasia integration; the evolving Russia-China strategic partnership may be on the verge of spawning its own operating system – and that is not a metaphor. Aurora is a mobile operating system currently developed by Open Mobile Platform, based in Moscow. It is derived from the Sailfish operating system, designed by the Finnish technology company Jolla, whose development team featured a number of Russians. Quite a few top coders at Google and Apple also come from the former USSR – exponents of a brilliant scientific academy tradition.
  • No Google? Who cares? Tencent, Xiaomi, Vivo and Oppo are already testing the HongMeng operating system, as part of a batch of one million devices already distributed. HongMeng’s launch is still a closely guarded secret by Huawei, but according to CEO Richard Yu, it could happen even before the end of 2019 for the Chinese market, running on smartphones, computers, TVs and cars. HongMeng is rumored to be 60% faster than Android.
  • Aurora could be regarded as part of Huawei's fast-evolving Plan B. Huawei is now turbo-charging the development and implementation of its own operating system, HongMeng, a process that started no less than seven years ago. Most of the work on an operating system is writing drivers and APIs (application programming interfaces). Huawei would be able to integrate its code into the Russian system in no time.
  • The HongMeng system may also harbor functions dedicated to security and protection of users' data. That's what's scaring Google the most: the prospect of Huawei developing software impenetrable to hacking attempts. Google is actively lobbying the Trump administration to add another reprieve – or even abandon the Huawei ban altogether. By now it's clear Team Trump has decided to wield a trade war as a geopolitical and geoeconomic weapon. They may not have calculated that other Chinese producers have the power to swing markets. Xiaomi, Oppo and Vivo, for instance, are not (yet) banned in the US market, and combined they sell more than Samsung. They could decide to move to Huawei's operating system in no time.
  • The existence of the Lineage operating system is proof that Huawei is not facing a lot of hurdles in developing HongMeng – which will be compatible with all Android apps. There would be no problem in adopting Aurora as well. Huawei will certainly open its own app store to compete with Google Play.