Artificial intelligence is ripe for abuse, tech executive warns: 'a fascist's dream'
-
“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.
-
All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of that playbook, she said.
-
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
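Crawford's point about bias in training data can be made concrete with a toy sketch. Everything below is invented for illustration (the feature names, records, and classifier are hypothetical, not from her talk): a crude co-occurrence classifier is trained on hiring records whose labels encode a historical skew, and it faithfully reproduces that skew.

```python
from collections import Counter

# Hypothetical historical hiring records: (applicant features, past decision).
# The labels encode a historical skew, not ground-truth ability.
records = [
    ({"school": "A", "zip": "downtown"}, "hire"),
    ({"school": "A", "zip": "downtown"}, "hire"),
    ({"school": "A", "zip": "suburb"},   "hire"),
    ({"school": "B", "zip": "downtown"}, "reject"),
    ({"school": "B", "zip": "downtown"}, "reject"),
    ({"school": "B", "zip": "suburb"},   "reject"),
]

def train(records):
    """Count how often each feature value co-occurs with each label."""
    counts = {}
    for feats, label in records:
        for key, val in feats.items():
            counts.setdefault((key, val), Counter())[label] += 1
    return counts

def predict(counts, feats):
    """Score each label by summed co-occurrence counts (a deliberately crude model)."""
    scores = Counter()
    for key, val in feats.items():
        for label, n in counts.get((key, val), {}).items():
            scores[label] += n
    return scores.most_common(1)[0][0]

model = train(records)
# Two applicants identical except for the school attended get opposite outcomes,
# because the model has learned the skew baked into the labels.
print(predict(model, {"school": "A", "zip": "suburb"}))   # hire
print(predict(model, {"school": "B", "zip": "suburb"}))   # reject
```

The model is never told anything about the applicants' merit; it simply treats past human decisions as ground truth, which is exactly the failure mode Crawford describes.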
-
Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid.
-
Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world.”
-
Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program: the predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas.
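The dynamic behind that finding can be sketched as a small feedback-loop simulation. This is a made-up illustration, not the Rand methodology: two districts have identical true incident rates, but patrols are allocated in proportion to previously *recorded* incidents, and a patrol can only record what it is present to see, so an initial recording skew keeps drawing attention to the same "hotspot".

```python
import random

random.seed(0)

TRUE_RATE = 0.1          # identical underlying incident rate in both districts
recorded = [12, 10]      # historical recorded incidents (district 0 starts slightly ahead)
patrols = 100            # patrols to allocate each round

for day in range(50):
    total = sum(recorded)
    for d in (0, 1):
        # Allocate patrols in proportion to *recorded* incidents ...
        sent = round(patrols * recorded[d] / total)
        # ... and each patrol only records incidents where it is deployed.
        recorded[d] += sum(random.random() < TRUE_RATE for _ in range(sent))

share = recorded[0] / sum(recorded)
print(f"district 0 share of recorded incidents: {share:.2f}")
```

Even though both districts are statistically identical, the data the system feeds back to itself is shaped by where it chose to look, so recorded counts never correct the initial skew.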
-
Crawford cited research from Cambridge University showing it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar accuracy (85%) was achieved for Democrats and Republicans. That study was concluded in 2013.
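Results like this rest on ordinary supervised learning over like patterns. The sketch below is a rough illustration only, not the Cambridge study's actual pipeline (which was more sophisticated), and the page names, like vectors, and labels are entirely fabricated: a logistic regression classifier, trained by plain gradient descent, learns to map binary like vectors to a binary trait.

```python
import math

# Toy "likes" matrix: each row is a user, each column a page (1 = liked).
# All data here is fabricated for illustration.
pages = ["page_a", "page_b", "page_c", "page_d"]
X = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
]
y = [1, 1, 1, 0, 0, 0]   # binary trait labels for the six users

# Logistic regression fitted by stochastic gradient descent.
w = [0.0] * len(pages)
b = 0.0
lr = 0.5
for _ in range(500):
    for xi, yi in zip(X, y):
        z = b + sum(wj * xj for wj, xj in zip(w, xi))
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of trait = 1
        err = p - yi
        b -= lr * err
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]

def prob(likes):
    """Predicted probability that a user with this like vector has the trait."""
    z = b + sum(wj * xj for wj, xj in zip(w, likes))
    return 1.0 / (1.0 + math.exp(-z))

print(prob([1, 1, 0, 0]))   # high for a "group 1" like pattern
print(prob([0, 0, 1, 1]))   # low for a "group 0" like pattern
```

The unsettling part is how little machinery is needed: given enough labeled examples, sensitive traits become just another prediction target.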
-
Another worry concerned the manipulation of political beliefs and the shifting of voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.
-
Such black-box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.
-
Crawford argues that we have to make these AI systems more transparent and accountable. “The ocean of data is so big. We have to map their complex subterranean and unintended effects.”
-
Crawford has founded AI Now, a research community focused on the social impacts of artificial intelligence, to do just this. “We want to make these systems as ethical as possible and free from unseen biases.”