"Google should know better, given that it already had a "hallucination problem" with its featured snippets at the top of search results back in 2017. The snippets algorithm seemed to particularly enjoy telling lies about U.S. presidents. Again, what could go wrong?"
"As Tremblay notes, a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently "hallucinate" - that is, make up information - because they are essentially autocomplete systems."
"Shapiro said that using ChatGPT can be "great" in helping applicants "brainstorm verbs" and reframe language that can "bring a level of polish to their applications." At the same time, she said that submitting AI-generated materials along with job applications can backfire if applicants don't review them for accuracy.
Shapiro said Jasper recruiters have interviewed candidates and discovered skills on their résumés that applicants said shouldn't be there or characterizations they weren't familiar with. Checking the AI-generated materials to ensure they accurately reflect an applicant's capabilities, she said, is critical if they're using ChatGPT - especially if the applicant gets hired."
"Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."