
Digit_al Society: Group items matching "mistakes" in title, tags, annotations or url
dr tech

Tall tales - 0 views

  • "Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."
dr tech

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can - and can't - do | Artificial intelligence (AI) | The Guardian - 0 views

  • "The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
dr tech

Photographer admits prize-winning image was AI-generated | Sony world photography awards | The Guardian - 0 views

  • "In a statement on his website, Eldagsen, who studied photography and visual arts at the Art Academy of Mainz, conceptual art and intermedia at the Academy of Fine Arts in Prague, and fine art at the Sarojini Naidu School of Arts and Communication in Hyderabad, said he "applied as a cheeky monkey" to find out if competitions would be prepared for AI images to enter. "They are not," he added. "We, the photo world, need an open discussion," said Eldagsen. "A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter - or would this be a mistake?""
dr tech

Google's AI chatbot Bard makes factual error in first demo - The Verge - 0 views

  • "As Tremblay notes, a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently "hallucinate" - that is, make up information - because they are essentially autocomplete systems."