Opinion | Chatbots Are a Danger to Democracy - The New York Times
-
longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process
-
Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.
-
In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.
-
In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.
-
around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots.
-
a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.
-
It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact
-
In the past, despite our differences, we could at least take for granted that all participants in the political process were human beings. This is no longer true.
-
Increasingly we share the online debate chamber with nonhuman entities that are rapidly growing more advanced
-
a bot developed by the British firm Babylon reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners. The average score for human doctors? 72 percent.
-
If chatbots are approaching the stage where they can answer diagnostic questions as well as or better than human doctors, then it's possible they might eventually reach or surpass our levels of political sophistication
-
They’ll likely have faces and voices, names and personalities — all engineered for maximum persuasion. So-called “deep fake” videos can already convincingly synthesize the speech and appearance of real politicians.
-
The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with.
-
in a world where, increasingly, the only feasible way of engaging in debate with chatbots is through the deployment of other chatbots also possessed of the same speed and facility, the worry is that in the long run we’ll become effectively excluded from our own party.
-
A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible
-
would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”
-
A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers.
-
We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human?
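The per-day and per-human caps proposed above could be enforced as a simple platform-side check. The sketch below is purely illustrative: the class, limits, and identifiers are all hypothetical, not drawn from the article or any real platform.

```python
from collections import defaultdict

class BotRateLimiter:
    """Illustrative platform-side rule: cap a bot's posts per day
    and its replies to any single human user. All limits are hypothetical."""

    def __init__(self, max_posts_per_day, max_replies_per_human):
        self.max_posts = max_posts_per_day
        self.max_replies = max_replies_per_human
        self.posts_today = defaultdict(int)   # bot_id -> posts so far today
        self.replies = defaultdict(int)       # (bot_id, human_id) -> reply count

    def allow_post(self, bot_id, reply_to=None):
        """Return True and record the post if it is within both caps."""
        if self.posts_today[bot_id] >= self.max_posts:
            return False
        if reply_to is not None and self.replies[(bot_id, reply_to)] >= self.max_replies:
            return False
        self.posts_today[bot_id] += 1
        if reply_to is not None:
            self.replies[(bot_id, reply_to)] += 1
        return True

# Example: a bot limited to 2 posts per day and 1 reply per human.
limiter = BotRateLimiter(max_posts_per_day=2, max_replies_per_human=1)
print(limiter.allow_post("bot1"))           # True  (first post)
print(limiter.allow_post("bot1", "alice"))  # True  (second post, first reply to alice)
print(limiter.allow_post("bot1", "alice"))  # False (daily cap of 2 reached)
```

A real deployment would need daily counter resets and persistent storage, but the point of the proposal is that such a rule is mechanically simple to encode once platforms choose to adopt it.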
-
We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate
-
the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake.