Digit_al Society / Group items tagged hallucination


dr tech

Microsoft and Google launched AI search too soon | Mashable

  • "Google should know better, given that it already had a "hallucination problem" with its featured snippets at the top of search results back in 2017. The snippets algorithm seemed to particularly enjoy telling lies about U.S. presidents. Again, what could go wrong?"
dr tech

Google's AI chatbot Bard makes factual error in first demo - The Verge

  • "As Tremblay notes, a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently "hallucinate" - that is, make up information - because they are essentially autocomplete systems."
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online

  • "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business customers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Lawyer in Huge Trouble After He Used ChatGPT in Court and It Totally Screwed Up

  • "Schwartz told the court that he "greatly regrets" using ChatGPT to do his research for the case "and will never do so in the future without absolute verification of its authenticity." Judge Castel, however, doesn't seem swayed, and in his May 4 order he in no uncertain terms described the gravity of the situation. "The Court is presented with an unprecedented circumstance," reads the judge's order for a future hearing. "A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases... six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.""
dr tech

Don't Expect ChatGPT to Help You Land Your Next Job

  • "Shapiro said that using ChatGPT can be "great" in helping applicants "brainstorm verbs" and reframe language that can "bring a level of polish to their applications." At the same time, she said that submitting AI-generated materials along with job applications can backfire if applicants don't review them for accuracy. Shapiro said Jasper recruiters have interviewed candidates and discovered skills on their résumés that applicants said shouldn't be there, or characterizations they weren't familiar with. Checking the AI-generated materials to ensure they accurately reflect an applicant's capabilities, she said, is critical if they're using ChatGPT - especially if the applicant gets hired."
dr tech

Tall tales

  • "Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."
dr tech

ChatGPT freaked out, generating gibberish for many users - Tech

  • "Actually, ChatGPT was freaking out in many ways yesterday, but one recurring theme was that it would be prompted with a normal question - typically something involving the tech business or the user's job - and respond with something flowery to the point of unintelligibility. For instance, according to an X post by architect Sean McGuire, the chatbot advised him at one point to ensure that "sesquipedalian safes are cross-keyed and the consul's cry from the crow's nest is met by beatine and wary hares a'twist and at winch in the willow.""