Future of the Web: Group items tagged artificial-intelligence

Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus...

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
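The principle at stake — that device encryption makes stored data unreadable without the owner's passcode — can be shown with a toy sketch. This is NOT real cryptography (real devices use AES with hardware-backed key derivation); the passcode, salt, and XOR keystream here are invented stand-ins chosen only to illustrate why stolen ciphertext is useless without the key:

```python
# Toy illustration, NOT real cryptography: real devices use AES with
# hardware-backed key derivation. This only shows the principle that
# data at rest is unreadable without the passcode-derived key.
import hashlib

def derive_key(passcode: str, salt: bytes, length: int) -> bytes:
    # Stand-in for a proper KDF such as PBKDF2 or scrypt.
    key, counter = b"", 0
    while len(key) < length:
        key += hashlib.sha256(salt + passcode.encode() + bytes([counter])).digest()
        counter += 1
    return key[:length]

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

salt = b"device-unique-salt"        # hypothetical per-device salt
plaintext = b"contacts, photos, banking tokens"
ciphertext = xor_bytes(plaintext, derive_key("1234", salt, len(plaintext)))

# The right passcode recovers the data; a wrong one yields noise.
recovered = xor_bytes(ciphertext, derive_key("1234", salt, len(ciphertext)))
garbage = xor_bytes(ciphertext, derive_key("0000", salt, len(ciphertext)))
```

In the Symantec scenario above, a thief holding `ciphertext` but not the passcode gets only `garbage`.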
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. The threat of such a breach, whether from a hacker or a nation-state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
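The per-device key escrow risk described above can be made concrete with a minimal sketch. Every name here is invented for illustration; the point is structural: however the keys are split up, the escrow database itself becomes a single point of failure.

```python
# Sketch of the per-device "access key" escrow scheme the proposal implies.
# All names are invented. One breach of the key database defeats the
# encryption on every device recorded in it.
import os

escrow_db = {}  # device_id -> escrow key; the database law enforcement would query

def provision_device(device_id: str) -> bytes:
    key = os.urandom(16)        # unique key per phone, as Vance suggests
    escrow_db[device_id] = key  # ...but every key must be stored somewhere
    return key

def attacker_with_db_dump(stolen_db, device_id):
    # A single compromise of the escrow database yields every device's key.
    return stolen_db.get(device_id)

device_key = provision_device("phone-001")
```

A thief who steals both a phone and a dump of `escrow_db` needs nothing else; this is the "widespread vulnerability" the annotation warns about.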
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.
    Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope
    In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately identifiable the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like merely science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Gonzalo San Gil, PhD.

Call for Papers | thinktwice.com | Creativity, Human Rights, Hacktivism [# Vi...

  •  
    "Call for Papers CALL FOR SUBMISSIONS We are looking for session submissions from Pirates, NGOs and Academia to following tracks: (other topics are allowed as well) Creativity: copyrights, patents, collaboration, citizen journalism, media, DRM, open access, FOI, public licensing, policy reform, education, etc… Human Rights: security, data protection, surveillance, FOI, basic income, emigration, voting rights, drones, non-proliferation, dual use technology, encryption, anonymity, transparency, net neutrality, open data, egovernment, society, whistle blowing, political science, etc… Activism|Hacktivism: Future, innovation, liquid democracy, transhumanism, cyborgs, startups, vision, 3d-printing, crowdsourcing, big data, participation, pirate parties, artificial intelligence, globalization, space travel, social networks, freemanning, freehammond, hacktivism, activism, civil disobedience, hacker culture, cyberpunk, cypherpunk, wikileaks, surveillance, digital activism, etc..."
  •  
    [# Via FB's Francisco George x Arif Yıldırım] Deadline July 18th 2014
Paul Merrell

The coming merge of human and machine intelligence

  • Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.
  • BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004. These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is. BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.
  • Mind meld
    But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.
  • Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts." A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe. Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.
  • Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network. It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.
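The claim that a network of neurons is decodable where a single neuron is not can be sketched numerically. This is an illustration of population averaging, not the actual BrainGate decoding algorithm; the linear response model, gain, and noise level are all assumptions chosen for clarity:

```python
# Illustration only, not the BrainGate decoder: each simulated neuron is a
# weak, noisy linear readout of an intended movement value. One neuron is
# mostly noise; averaging a network of them recovers the intent.
import random

random.seed(0)
GAIN = 0.1  # assumed per-neuron signal gain (invented)

def neuron_response(intent: float) -> float:
    return GAIN * intent + random.gauss(0.0, 1.0)  # signal buried in noise

intent = 5.0
single = neuron_response(intent)                      # nearly useless alone
population = [neuron_response(intent) for _ in range(10_000)]
decoded = sum(population) / len(population) / GAIN    # average, then rescale
```

Averaging 10,000 noisy readouts shrinks the noise by a factor of about 100, which is why the chip attaches to a network of neurons rather than to one.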
  •  
    Of course once the human brain is interfaced with the internet, then we will be able to do the Vulcan mind-meld thing. And NSA will be busily crawling the Internet for fresh brain dumps to their data center, which then encompasses the entire former state of Utah. Conventional warfare is a thing of the past as the cyberwar commands of great powers battle for control of the billions of minds making up BrainNet, the internet's successor. Meanwhile, a hackers' Reaper malware trawls BrainNet for bank account numbers and passwords that it forwards for automated harvesting of personal funds. "Ah, Houston ... we have a problem ..."
Paul Merrell

The EU's White Paper on AI: A Thoughtful and Balanced Way Forward - Lawfare

  • On Feb. 19, the European Commission released a White Paper on Artificial Intelligence outlining its wide-ranging plan to develop artificial intelligence (AI) in Europe. The commission also released a companion European data strategy, aiming to make more data sets available for business and government to promote AI development, along with a report on the safety of AI systems proposing some reforms of the commission’s product liability regime. Initial press reports about the white paper focused on how the commission had stepped back from a proposal in its initial draft for a three- to five-year moratorium on facial recognition technology. But the proposed framework is much more than that: It represents a sensible and thoughtful basis to guide the EU’s consideration of legislation to help direct the development of AI applications, and an important contribution to similar debates going on around the world. The key takeaways are that the EU plans to:
    • Pursue a uniform approach to AI across the EU in order to avoid divergent member state requirements forming barriers to its single market.
    • Take a risk-based, sector-specific approach to regulating AI.
    • Identify in advance high-risk sectors and applications, including facial recognition software.
    • Impose new regulatory requirements and prior assessments to ensure that high-risk AI systems conform to requirements for safety, fairness and data protection before they are released onto the market.
    • Use access to the huge European market as a lever to spread the EU’s approach to AI regulation across the globe.
Paul Merrell

Elon Musk wants brain implants to merge humans with artificial intelligence | Science |...

  • Elon Musk and his team of boffins are exploring ways in which they can connect a computer interface to the mind. The South African-born billionaire claims to have already trialled the revolutionary device on a monkey which was able to control the computer with its brain. Mr Musk said at a presentation on Tuesday: “A monkey has been able to control the computer with his brain.”
  • NeuraLink describes the device as “sewing machine-like”. The system implants ultra-thin threads deep into the brain’s nervous system. The company has applied to US regulators in the hopes of beginning trials on humans next year. Primarily, the firm states that initially it wants to help people with severe neurological conditions, but as with all of his companies, Mr Musk is aiming for more and sees humanity’s future as having “superhuman cognition”. The device in question, which is nameless so far, will see the tiny thread fitted with 3,000 electrodes which can monitor the activity of 1,000 neurons.
  • Mr Musk hopes the product will be on the market within four years.
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne...

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over the top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But the linked Scientific American article notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of information not in the Nature/Scientific American version of a sensationalist nature, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional computer."
Paul Merrell

Information Warfare: Automated Propaganda and Social Media Bots | Global Research

  • NATO has announced that it is launching an “information war” against Russia. The UK publicly announced a battalion of keyboard warriors to spread disinformation. It’s well-documented that the West has long used false propaganda to sway public opinion. Western military and intelligence services manipulate social media to counter criticism of Western policies. Such manipulation includes flooding social media with comments supporting the government and large corporations, using armies of sock puppets, i.e. fake social media identities. In 2013, the American Congress repealed the formal ban against the deployment of propaganda against U.S. citizens living on American soil. So there’s even less to constrain propaganda than before.
  • Information warfare for propaganda purposes also includes:
    • The Pentagon, Federal Reserve and other government entities using software to track discussion of political issues … to try to nip dissent in the bud before it goes viral
    • “Controlling, infiltrating, manipulating and warping” online discourse
    • Use of artificial intelligence programs to try to predict how people will react to propaganda
  • Some of the propaganda is spread by software programs. We pointed out 6 years ago that people were writing scripts to censor hard-hitting information from social media. One of America’s top cyber-propagandists – former high-level military information officer Joel Harding – wrote in December: I was in a discussion today about information being used in social media as a possible weapon.  The people I was talking with have a tool which scrapes social media sites, gauges their sentiment and gives the user the opportunity to automatically generate a persuasive response. Their tool is called a “Social Networking Influence Engine”. *** The implications seem to be profound for the information environment. *** The people who own this tool are in the civilian world and don’t even remotely touch the defense sector, so getting approval from the US Department of State might not even occur to them.
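The "Social Networking Influence Engine" workflow Harding describes (scrape posts, gauge their sentiment, generate a persuasive response) might look something like the following sketch. The word lists and canned replies are invented; a real system would use far more sophisticated sentiment models:

```python
# Speculative sketch of a scrape-score-respond pipeline. All word lists and
# replies are invented placeholders; only the three-stage shape is taken
# from the passage above.
NEGATIVE = {"fails", "corrupt", "lies", "disaster"}
POSITIVE = {"great", "works", "honest", "success"}

CANNED_REPLIES = {
    "negative": "Have you considered the official figures? They tell a different story.",
    "positive": "Glad you agree! Share this with your followers.",
}

def sentiment(post: str) -> str:
    # Crude bag-of-words scoring: count positive vs. negative word hits.
    words = set(post.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

def auto_respond(post: str) -> str:
    # The "automatically generate a persuasive response" step.
    return CANNED_REPLIES[sentiment(post)]
```

The triviality of the sketch is the point: even this crude pipeline can flood a comment thread unattended.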
  • How Can This Be Real? Gizmodo reported in 2010: Software developer Nigel Leck got tired of rehashing the same 140-character arguments against climate change deniers, so he programmed a bot that does the work for him. With citations! Leck’s bot, @AI_AGW, doesn’t just respond to arguments directed at Leck himself, it goes out and picks fights. Every five minutes it trawls Twitter for terms and phrases that commonly crop up in Tweets that refute human-caused climate change. It then searches its database of hundreds of responses to find the counter-argument best suited for that tweet—usually a quick statement and a link to a scientific source. As can be the case with these sorts of things, many of the deniers don’t know they’ve been targeted by a robot and engage AI_AGW in debate. The bot will continue to fire back canned responses that best fit the interlocutor’s line of debate—Leck says this goes on for days, in some cases—and the bot’s been outfitted with a number of responses on the topic of religion, where the arguments unsurprisingly often end up. Technology has come a long way in the past five years, so if a lone programmer could do this five years ago, imagine what he could do now. And the big players have a lot more resources at their disposal than a lone climate activist/software developer does. For example, a government expert told the Washington Post that the government “quite literally can watch your ideas form as you type.” So if the lone programmer is doing it, it’s not unreasonable to assume that the big boys are widely doing it.
  • How Effective Are Automated Comments? Unfortunately, this is more effective than you might assume … Specifically, scientists have shown that name-calling and swearing break down people’s ability to think rationally … and intentionally sowing discord and posting junk comments to push down insightful comments are common propaganda techniques. Indeed, an automated program need not even be that sophisticated … it can copy a couple of words from the main post or a comment, and then spew back one or more radioactive labels such as “terrorist”, “commie”, “Russia-lover”, “wimp”, “fascist”, “loser”, “traitor”, “conspiratard”, etc. Given that Harding and his compadres consider anyone who questions any U.S. policy an enemy of the state – as does the Obama administration – many honest, patriotic writers and commenters may be targeted for automated propaganda comments.
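Mechanically, the kind of auto-reply bot these annotations describe (whether serving rebuttals like @AI_AGW or spewing "radioactive labels") reduces to a keyword scan plus a lookup table. A minimal sketch, in which every trigger phrase, post, and canned response is invented for illustration and no real platform API is touched:

```python
# Toy sketch of a keyword-triggered reply bot: scan a feed of posts for
# trigger phrases, then attach the canned response keyed to that phrase.
# All phrases, posts, and responses below are invented for illustration.

TRIGGERS = {
    "climate hoax": "See the temperature record: https://example.org/evidence",
    "warming stopped": "Short-term noise vs. long-term trend: https://example.org/trend",
}

def scan_and_reply(posts):
    """Return (post, reply) pairs for every post containing a trigger phrase."""
    replies = []
    for post in posts:
        text = post.lower()
        for phrase, canned in TRIGGERS.items():
            if phrase in text:
                replies.append((post, canned))
                break  # one reply per matching post, like the bot described above
    return replies

feed = [
    "Nice weather today",
    "The whole climate HOAX has been debunked",
    "They said warming stopped in 1998",
]
for post, reply in scan_and_reply(feed):
    print(f"{post!r} -> {reply!r}")
```

A cruder propaganda variant would replace the curated responses with the label-spewing described above: echo a few words from the target post and append an insult picked at random, which is why such comments need almost no sophistication to produce at scale.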
Gonzalo San Gil, PhD.

Linux Creator Linus Torvalds Laughs at the AI Apocalypse - 0 views

  •  
    "Over the past several months, many of the world's most famous scientists and engineers - including Stephen Hawking - have said that one of the biggest threats to humanity is an artificial superintelligence. But Linus Torvalds, the irascible creator of open source operating system Linux, says their fears are idiotic."
Paul Merrell

Assange Keeps Warning Of AI Censorship, And It's Time We Started Listening - 0 views

  • Where power is not overtly totalitarian, wealthy elites have bought up all media, first in print, then radio, then television, and used it to advance narratives that are favorable to their interests. Not until humanity gained widespread access to the internet has our species had the ability to freely and easily share ideas and information on a large scale without regulation by the iron-fisted grip of power. This newfound ability arguably had a direct impact on the election for the most powerful elected office in the most powerful government in the world in 2016, as a leak publishing outlet combined with alternative and social media enabled ordinary Americans to tell one another their own stories about what they thought was going on in their country. This newly democratized narrative-generating power of the masses gave those in power an immense fright, and they’ve been working to restore the old order of power controlling information ever since. And the editor-in-chief of the aforementioned leak publishing outlet, WikiLeaks, has been repeatedly trying to warn us about this coming development.
  • In a statement that was recently read during the “Organising Resistance to Internet Censorship” webinar, sponsored by the World Socialist Web Site, Assange warned of how “digital super states” like Facebook and Google have been working to “re-establish discourse control”, giving authority over how ideas and information are shared back to those in power. Assange went on to say that the manipulative attempts of world power structures to regain control of discourse in the information age have been “operating at a scale, speed, and increasingly at a subtlety, that appears likely to eclipse human counter-measures.” What this means is that, using increasingly more advanced forms of artificial intelligence, power structures are becoming more and more capable of controlling the ideas and information that people are able to access and share with one another, hiding information which goes against the interests of those power structures and elevating narratives which support those interests, all of course while maintaining the illusion of freedom and lively debate.
  • To be clear, this is already happening. Due to a recent shift in Google’s “evaluation methods”, traffic to left-leaning and anti-establishment websites has plummeted, with sites like WikiLeaks, Alternet, Counterpunch, Global Research, Consortium News, Truthout, and WSWS losing up to 70 percent of the views they were getting prior to the changes. Powerful billionaire oligarchs Pierre Omidyar and George Soros are openly financing the development of “an automated fact-checking system” (AI) to hide “fake news” from the public.
  • To make matters even worse, there’s no way to know the exact extent to which this is going on, because we know that we can absolutely count on the digital super states in question to lie about it. In the lead-up to the 2016 election, Twitter CEO Jack Dorsey was asked point-blank if Twitter was obstructing the #DNCLeaks from trending, a hashtag people were using to build awareness of the DNC emails which had just been published by WikiLeaks, and Dorsey flatly denied it. More than a year later, we learned from a prepared testimony before the Senate Subcommittee on Crime and Terrorism by Twitter’s acting general counsel Sean J. Edgett that this was completely false and Twitter had indeed been doing exactly that to protect the interests of US political structures by sheltering the public from information allegedly gathered by Russian hackers.
  • Imagine going back to a world like the Middle Ages where you only knew the things your king wanted you to know, except you could still watch innocuous kitten videos on Youtube. That appears to be where we may be headed, and if that happens, any populist movement arising to hold power to account may be effectively locked out of the realm of possibility forever. To claim that these powerful new media corporations are just private companies practicing their freedom to determine what happens on their property is to bury your head in the sand and ignore the extent to which these digital super states are already inextricably interwoven with existing power structures. In a corporatist system of government, which America unquestionably has, corporate censorship is government censorship, of an even more pernicious strain than if Jeff Sessions were touring the country burning books. The more advanced artificial intelligence becomes, the more adept these power structures will become at manipulating us. Time to start paying very close attention to this.
tanyadurham

How Amazon using AI to shell out double money out of your pocket? - 1 views

  •  
    Machine Learning and Artificial Intelligence in Retail and E-Commerce : A Study of its Applications and Implications
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers. Photo: Alex Emmons
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
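The distinction the demo drew between total faces seen and unique serial numbers amounts to simple bookkeeping over per-frame detection records. A minimal sketch, assuming an invented record format of (serial, emotion) pairs per frame — the actual data shapes Microsoft's service returns are not documented in this article:

```python
# Sketch of the crowd-insights bookkeeping described above: each camera frame
# yields per-face records tagged with a persistent serial number, and the
# display distinguishes total detections from unique faces. The record format
# and emotion labels here are assumptions for illustration only.
from collections import Counter, defaultdict

def summarize(frames):
    """frames: list of frames; each frame is a list of (serial, emotion) records."""
    total = 0
    per_face = defaultdict(Counter)  # serial -> emotion counts across frames
    for frame in frames:
        for serial, emotion in frame:
            total += 1
            per_face[serial][emotion] += 1
    # Most frequent emotion observed for each serial number
    dominant = {s: c.most_common(1)[0][0] for s, c in per_face.items()}
    return {
        "total_detections": total,
        "unique_faces": len(per_face),
        "dominant_emotion": dominant,
    }

frames = [
    [("b2ff", "neutral"), ("a101", "happiness")],
    [("b2ff", "happiness"), ("a101", "happiness"), ("c7d2", "surprise")],
    [("b2ff", "happiness")],
]
print(summarize(frames))
```

Note that this is exactly the privacy concern Bedoya raises: even without names, a stable serial number lets the operator accumulate a profile per face for as long as the templates are retained.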
Paul Merrell

Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy? - Lawfare - 1 views

  • “We are truly fucked.” That was Motherboard’s spot-on reaction to deep fake sex videos (realistic-looking videos that swap a person’s face into sex scenes actually involving other people). And that sleazy application is just the tip of the iceberg. As Julian Sanchez tweeted, “The prospect of any Internet rando being able to swap anyone’s face into porn is incredibly creepy. But my first thought is that we have not even scratched the surface of how bad ‘fake news’ is going to get.” Indeed. Recent events amply demonstrate that false claims—even preposterous ones—can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization. The resulting harms are significant for individuals, businesses, and democracy. Belated recognition of the problem has spurred a variety of efforts to address this most recent illustration of truth decay, and at first blush there seems to be reason for optimism. Alas, the problem may soon take a significant turn for the worse thanks to deep fakes. Get used to hearing that phrase. It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.
  • Fueled by artificial intelligence, digital impersonation is on the rise. Machine-learning algorithms (often neural networks) combined with facial-mapping software enable the cheap and easy fabrication of content that hijacks one’s identity — voice, face, body. Deep fake technology inserts individuals’ faces into videos without their permission. The result is “believable videos of people doing and saying things they never did.” Not surprisingly, this concept has been quickly leveraged to sleazy ends. The latest craze is fake sex videos featuring celebrities like Gal Gadot and Emma Watson. Although the sex scenes look realistic, they are not consensual cyber porn. Conscripting individuals (more often women) into fake porn undermines their agency, reduces them to sexual objects, engenders feelings of embarrassment and shame, and inflicts reputational harm that can devastate careers (especially for everyday people). Regrettably, cyber stalkers are sure to use fake sex videos to torment victims. What comes next? We can expect to see deep fakes used in other abusive, individually-targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment.
Gonzalo San Gil, PhD.

Learn more about deep learning and neural networks | Opensource.com - 0 views

  •  
    "Dig into deep learning in this interview with the authors of the recently released O'Reilly book Deep Learning: A Practitioner's Approach."
Paul Merrell

"In 10 Years, the Surveillance Business Model Will Have Been Made Illegal" - - 1 views

  • The opening panel of the Stigler Center’s annual antitrust conference discussed the source of digital platforms’ power and what, if anything, can be done to address the numerous challenges their ability to shape opinions and outcomes present. 
  • Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.   While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.   The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”
  • Google’s fellow FAANGs also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.   But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power makes it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.   Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.
Paul Merrell

Alexa and Siri Can Hear This Hidden Command. You Can't. - The New York Times - 0 views

  • Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.
  • Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant.
Paul Merrell

In the Age of AI (full film) | FRONTLINE - YouTube - 0 views

shared by Paul Merrell on 24 Aug 20
  • FRONTLINE PBS | Official
  • A documentary exploring how artificial intelligence is changing life as we know it — from jobs to privacy to a growing rivalry between the U.S. and China.
  •  
    About 2-hour documentary, excellent.
Paul Merrell

Latest ChatGPT lawsuits highlight backup legal theory against AI platforms | Reuters - 0 views

  • In the plethora of copyright lawsuits against artificial intelligence developers, a pair of complaints filed on Wednesday against OpenAI and related defendants stands out. Unlike most of the authors, artists and news organizations that have sued AI developers, The Intercept Media and Raw Story Media are not alleging straightforward copyright infringement claims. The media companies are instead asserting only that OpenAI and its co-defendants violated the Digital Millennium Copyright Act, or DMCA, deliberately undermining their copyrights by stripping identifying information out of articles used to train the AI system behind the popular chatbot ChatGPT. As my Reuters colleague Blake Brittain reported on Wednesday, the 1998 federal DMCA statute prohibits the removal of information that can help copyright holders detect infringement, including article titles, author names and copyright dates.