"Or, as the New York Times summarised it, "doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers' surprise, ChatGPT alone outperformed the doctors."
More interesting, though, were two other revelations: the experiment demonstrated doctors' sometimes unwavering belief in a diagnosis they had made, even when ChatGPT suggested a better one; and it also suggested that at least some of the physicians didn't really know how best to exploit the tool's capabilities. Which in turn revealed what AI advocates such as Ethan Mollick have been saying for aeons: that effective "prompt engineering" - knowing what to ask an LLM to get the most out of it - is a subtle and poorly understood art."
"The company, Spines, will charge authors between $1,200 and $5,000 to have their books edited, proofread, formatted, designed and distributed with the help of AI.
Independent publisher Canongate said in a Bluesky post that "these dingbats … don't care about writing or books". Spines is charging "hopeful would-be authors to automate the process of flinging their book out into the world, with the least possible attention, care or craft".
"These aren't people who care about books or reading or anything remotely related," said author Suyi Davies Okungbowa, whose most recent book is Lost Ark Dreaming, in a post on Bluesky. "These are opportunists and extractive capitalists.""
""AI lowers the barriers to entry for creating content. You don't need coding skills or anything like that to generate these images. It is also symptomatic of far-right views going mainstream or being normalised," he said, adding that the far right appeared to have fewer moral concerns about AI imagery.
Allchorn said more established political parties appeared warier of using AI in official campaigns: "Mainstream actors still have ethical concerns about the effectiveness, authenticity and reliability of these models that far-right or extremist actors are not beholden to.""
""People glamourise them types of things and the smallest thing can be escalated on social media," he said. "A fight can happen between two people and they can squash it [reach a truce], but because the video's out there on social media and it looks from a different perspective like one is losing, pride is going to be hurt so you might go out there and get some sort of revenge and let people know, you're not going to mess with me."
It all created anxiety, explained St Clair-Hughes.
"The fearmongering on social media puts you in a fight or flight state so when you leave the house now you are either on the front foot or on the back foot. So you step outside ready to do whatever you need to do … It's the subliminals - no one's telling you to pick up a knife and commit violence, it's just the more that you see it …""
"Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.
That's now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind, which has been published on arXiv and has not yet been peer-reviewed."
"The breakdown in negotiations resulted in Meta blocking all news sources on Facebook in Canada "recklessly and dangerously" as all 10 provinces and three territories in the country burned, Canada's heritage minister, Pascale St-Onge, told Guardian Australia.
"Facebook is leaving disinformation and misinformation to spread on their platform, while choosing to block access to reliable, high-quality, independent journalism," St-Onge said.
"Facebook is just leaving more room for misinformation during need-to-know situations like wildfires, emergencies, local elections and other critical times for people to make decisions on matters that affect them.""
"The climate crisis could prove AI's greatest challenge. While Google publicises AI-driven advances in flooding, wildfire and heatwave forecasts, it, like many big tech companies, uses more energy than many countries. Today's large models are a major culprit. It can take 10 gigawatt-hours of electricity to train a single large language model like OpenAI's ChatGPT, enough to supply 1,000 US homes for a year."
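The "1,000 US homes for a year" claim above is easy to sanity-check with back-of-the-envelope arithmetic. A sketch, assuming an average US home uses roughly 10,500 kWh of electricity per year (a typical EIA-style ballpark, not a figure from the article):

```python
# Sanity check of the "10 GWh ≈ 1,000 US homes for a year" claim.
# Assumption (not from the article): an average US home uses about
# 10,500 kWh of electricity per year.

training_energy_kwh = 10 * 1_000_000   # 10 gigawatt-hours expressed in kWh
home_annual_kwh = 10_500               # assumed average US household usage

homes_supplied_for_a_year = training_energy_kwh / home_annual_kwh
print(round(homes_supplied_for_a_year))  # roughly 950, i.e. on the order of 1,000 homes
```

Under that assumption the article's figure holds to within about 5%, so "1,000 homes" is a fair order-of-magnitude gloss.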
"The alleged cybercriminals are thought to have carefully planned out an elaborate and hyper-targeted phishing scam that went after employees of large companies like MGM and Twilio. In fact, Scattered Spider's breach at MGM, which involved a phone call to the company's help desk, resulted in a temporary shutdown of the company's hotel and casino operations, costing the company $100 million.
The Scattered Spider plan of attack involved sending text messages to employees at the targeted companies while pretending to be part of their employer's IT department. The texts urged the employees to log in via a link provided in the message; otherwise, the text claimed, their employee accounts would be deactivated."
"MPs are to summon Elon Musk to testify about X's role in spreading disinformation, in a parliamentary inquiry into the UK riots and the rise of false and harmful AI content, the Guardian has learned.
Senior executives from Meta, which runs Facebook and Instagram, and TikTok are also expected to be called for questioning as part of a Commons science and technology select committee social media inquiry."
"Roblox to give parents more control over children's activity after warnings over grooming
Parents will be able to see who children interact with and ensure they cannot play games with graphic violence as report accuses company of lax safety controls"
"A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors.
The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help."
"AI may displace 3m jobs but long-term losses 'relatively modest', says Tony Blair's thinktank
Rise in unemployment in low hundreds of thousands as technology creates roles, Tony Blair Institute suggests"
"But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
"The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."
"First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the "10%" expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AIs are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach "a country of geniuses in a datacenter"."
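The comparative-advantage logic in the passage above can be made concrete with a toy calculation (all numbers are invented for illustration and are not from the essay): even when an AI is absolutely better at every task, a human hour still raises total output, because specialisation should follow relative, not absolute, productivity.

```python
# Toy comparative-advantage illustration (all numbers invented).
# Output per hour on two tasks:
ai = {"A": 10, "B": 8}     # the AI is absolutely better at BOTH tasks
human = {"A": 1, "B": 4}

# Opportunity cost of one unit of B, measured in units of A forgone:
ai_cost_of_B = ai["A"] / ai["B"]          # 1.25 units of A
human_cost_of_B = human["A"] / human["B"]  # 0.25 units of A
# The human gives up less A per unit of B -> comparative advantage in B.

# Each agent has one hour, and both goods are wanted.
# If the AI works alone and splits its hour between the tasks:
ai_alone = (0.5 * ai["A"], 0.5 * ai["B"])      # (5.0 of A, 4.0 of B)

# If each specialises according to comparative advantage:
together = (1.0 * ai["A"], 1.0 * human["B"])   # (10.0 of A, 4.0 of B)

# Same amount of B, twice the A: the human's hour still adds value
# even though the AI is strictly more productive at everything.
print(ai_alone, together)
```

The same mechanism is what the passage means by humans becoming "highly leveraged" on the residual tasks: the scarcer the human-favoured input, the more each human hour is worth.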
""I'm in shock, there are no words right now. I've been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable," said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.
"I don't want anyone viewing me like that. Just the fact that my image is out there, could be saying anything - promoting military rule in a country I did not know existed. People will think I am involved in the coup," Torres added after being shown the video by the Guardian for the first time."