
Digit_al Society / Group items tagged: mistakes


dr tech

Facebook blames hate speech ads on mistake, immediately approves more hate speech ads |... - 0 views

  •  
    ""We submitted another two examples of real-life Amharic-language hate speech to them a week later," said Global Witness. "Both ads were, again, accepted by Facebook for publication within a matter of hours.""
dr tech

AI image of Pope Francis in a puffer jacket fooled the internet and experts fear there'... - 0 views

  •  
    "A fake, AI-generated image of Pope Francis stepping out in a stylish white puffer jacket and bejewelled crucifix racked up millions of views over the weekend - with many mistaking it for a real image. Experts fear the rapidly developing technology behind the image could soon undermine our ability to distinguish fake photos, which can be generated in seconds, from reality."
dr tech

Tall tales - 0 views

  •  
    "Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."
dr tech

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can - and c... - 0 views

  •  
    "The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
dr tech

Photographer admits prize-winning image was AI-generated | Sony world photography award... - 0 views

  •  
    "n a statement on his website, Eldagsen, who studied photography and visual arts at the Art Academy of Mainz, conceptual art and intermedia at the Academy of Fine Arts in Prague, and fine art at the Sarojini Naidu School of Arts and Communication in Hyderabad, said he "applied as a cheeky monkey" to find out if competitions would be prepared for AI images to enter. "They are not," he added. "We, the photo world, need an open discussion," said Eldagsen. "A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter - or would this be a mistake?"
dr tech

A new era of lies: Mark Zuckerberg has just ushered in an extinction-level event for tr... - 0 views

  •  
    "Zuckerberg has said that the platform, which has more than 3 billion people worldwide logging on to its apps every day, will be adopting an Elon Musk-style community notes format for policing what is and isn't acceptable speech on its platforms. Starting in the US, the company will be dramatically shifting the Overton window towards whoever can shout the loudest. The Meta CEO all but admitted that the move was politically motivated. "It's time to get back to our roots around free expression," he said, confessing that "restrictions on topics like immigration and gender […] are out of touch with mainstream discourse". He admitted to past "censorship mistakes" - here, probably meaning the past four years of tamping down political speech while a Democratic president was in office - and said he would "work with President Trump to push back against foreign governments going after American companies to censor more"."
dr tech

Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills - 0 views

  •  
    "The findings from those examples were striking: overall, those who trusted the accuracy of the AI tools found themselves thinking less critically, while those who trusted the tech less used more critical thought when going back over AI outputs. "The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote. "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." This isn't enormously surprising. Something we've observed in many domains, from self-driving vehicles to scrutinizing news articles produced by AI, is that humans quickly go on autopilot when they're supposed to be overseeing an automated system, often allowing mistakes to slip past."
dr tech

Audiences Prove that Experts Are Dead Wrong - by Ted Gioia - 0 views

  •  
    ""The rebirth of longform runs counter to everything media experts are peddling. They are all trying to game the algorithm. But they're making a huge mistake….""
dr tech

Google's AI chatbot Bard makes factual error in first demo - The Verge - 0 views

  •  
    "As Tremblay notes, a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently "hallucinate" - that is, make up information - because they are essentially autocomplete systems."