Google's Relationship With Facts Is Getting Wobblier - The Atlantic
-
Misinformation or even disinformation in search results was already a problem before generative AI. Back in 2017, The Outline noted that a snippet once confidently asserted that Barack Obama was the king of America.
-
This is what experts have worried about since ChatGPT first launched: false information confidently presented as fact, without any indication that it could be totally wrong. The problem is “the way things are presented to the user, which is Here’s the answer,” Chirag Shah, a professor of information and computer science at the University of Washington, told me. “You don’t need to follow the sources. We’re just going to give you the snippet that would answer your question. But what if that snippet is taken out of context?”
-
Responding to the notion that Google is incentivized to prevent users from navigating away, he added that “we have no desire to keep people on Google.”
-
Pandu Nayak, a vice president for search who leads the company’s search-quality teams, told me that snippets are designed to be helpful to the user, to surface relevant and high-caliber results. He argued that they are “usually an invitation to learn more” about a subject.
-
“It’s a strange world where these massive companies think they’re just going to slap this generative slop at the top of search results and expect that they’re going to maintain quality of the experience,” Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern University, told me. “I’ve caught myself starting to read the generative results, and then I stop myself halfway through. I’m like, Wait, Nick. You can’t trust this.”
-
Nayak said the team focuses on the bigger underlying problem, and whether its algorithm can be trained to address it.
-
If Nayak is right, and people do still follow links even when presented with a snippet, anyone who wants to gain clicks or money through search has an incentive to capitalize on that—perhaps even by flooding the zone with AI-written content.
-
Nayak told me that Google plans to fight AI-generated spam as aggressively as it fights regular spam, and claimed that the company keeps about 99 percent of spam out of search results.
-
The Kenya result still pops up on Google, despite viral posts about it. This is a strategic choice, not an error. If a snippet violates Google policy (for example, if it includes hate speech) the company manually intervenes and suppresses it, Nayak said. However, if the snippet is untrue but doesn’t violate any policy or cause harm, the company will not intervene.
-
Experts I spoke with had several ideas for how tech companies might mitigate the potential harms of relying on AI in search.
-
For starters, tech companies could become more transparent about generative AI. Diakopoulos suggested that they could publish information about the quality of facts provided when people ask questions about important topics.
-
They could use a technique known as “retrieval-augmented generation,” or RAG, which instructs the bot to cross-check its answer against what is published elsewhere, essentially helping it self-fact-check. (A spokesperson for Google said the company uses similar techniques to improve its output.) They could open up their tools to researchers to stress-test them. Or they could add more human oversight to their outputs, maybe investing in fact-checking efforts.
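The RAG idea mentioned above can be sketched in a few lines: before an answer is surfaced, retrieve supporting documents and check whether the answer is actually grounded in them. This is a minimal illustrative sketch, not Google's system; the corpus, the word-overlap scoring, and the substring-based support check are all simplified stand-ins for real retrieval and verification.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve documents relevant to the query, then check that the
# candidate answer is supported by what was retrieved.
# All names and data here are hypothetical examples.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def is_supported(answer, documents):
    """Crude self-fact-check: does any retrieved document contain the answer?"""
    return any(answer.lower() in doc.lower() for doc in documents)

corpus = [
    "Nairobi is the capital of Kenya.",
    "Kenya lies on the equator in East Africa.",
]

docs = retrieve("capital of Kenya", corpus)
print(is_supported("Nairobi", docs))   # True: the claim appears in a retrieved document
print(is_supported("Mombasa", docs))   # False: unsupported claim gets flagged
```

A production system would replace the word-overlap ranking with an embedding-based retriever and the substring check with an entailment model, but the structure is the same: generation is gated by retrieved evidence rather than emitted unchecked.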
-
Fact-checking, however, is a fraught proposition. In January, Google’s parent company, Alphabet, laid off roughly 6 percent of its workers, and last month, the company cut at least 40 jobs in its Google News division. This is the team that, in the past, has worked with professional fact-checking organizations to add fact-checks into search results.
-
Alex Heath, at The Verge, reported that top leaders were among those laid off, and Google declined to give me more information. The cuts certainly suggest that Google is not investing more in its fact-checking partnerships as it builds its generative-AI tool.
-
Nayak acknowledged how daunting a task human-based fact-checking is for a platform of Google’s extraordinary scale. Fifteen percent of daily searches are ones the search engine hasn’t seen before, Nayak told me. “With this kind of scale and this kind of novelty, there’s no sense in which we can manually curate results.”
-
Creating an infinite, largely automated, and still accurate encyclopedia seems impossible. And yet that seems to be the strategic direction Google is taking.
-
A representative for Google told me that this was an example of a “false premise” search, a type that is known to trip up the algorithm. If she were trying to date me, she argued, she wouldn’t just stop at the AI-generated response given by the search engine, but would click the link to fact-check it.