"The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."
"First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the "10%" expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI's are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach "a country of geniuses in a datacenter"."
""I'm in shock, there are no words right now. I've been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable," said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.
"I don't want anyone viewing me like that. Just the fact that my image is out there, could be saying anything - promoting military rule in a country I did not know existed. People will think I am involved in the coup," Torres added after being shown the video by the Guardian for the first time."
"Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" - an AI system named after the German philosopher Jürgen Habermas.
The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected."
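The selection step described here - group members rate candidate statements and the most-endorsed one wins - can be sketched in a few lines of Python. All names below are hypothetical; the actual system wraps a fine-tuned language model around this loop to draft and refine the candidate statements:

```python
def select_group_statement(candidates, ratings):
    """Return the candidate statement with the highest total endorsement.

    candidates: draft group statements (in the real system, generated by
                a language model from members' written views)
    ratings:    ratings[m][i] is member m's score for candidates[i]
    """
    totals = [sum(member[i] for member in ratings) for i in range(len(candidates))]
    return candidates[max(range(len(candidates)), key=totals.__getitem__)]

# Three members rate two drafts; the second draft has more total support.
statement = select_group_statement(["Draft A", "Draft B"], [[2, 5], [3, 4], [1, 5]])
print(statement)  # Draft B
```

The ratings also feed back into training in the published system; this sketch only covers the final selection-by-endorsement step.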
"In addition to their usual job of grubbing out bad human edits, they're having to spend an increasing amount of their time trying to weed out AI filler.
404 Media has talked to Ilyas Lebleu, an editor at the crowdsourced encyclopedia who was involved in founding the "WikiProject AI Cleanup" project. The group is trying to come up with best practices to detect machine-generated contributions. (And no, before you ask, AI is useless for this.)"
"TikTok quantified the precise amount of viewing it takes for someone to form a habit: 260 videos.
Kentucky authorities note that while it might seem like a lot, TikTok videos can be just a few seconds long.
"Thus, in under 35 minutes, an average user is likely to become addicted to the platform," the state investigators concluded."
"That is, in a way, Orion's most powerful and dangerous feature: they're so normal that people will want to wear them; they're so normal that people won't notice them.
On the other hand, Meta has created a new gadget that, like every other before, can be enhanced or modified for other purposes, for better or worse. Should Meta stop building tech because a small number of people will use it for evil? If they had served this use case on a silver platter then yes, they should be held accountable. They didn't. Sure, Zuck's no friend, but he's not the one prying into your privacy."
"A 24-hour news channel startup based in southern California comes with a twist: all of the reporters and production are AI-generated. CBC's Jean-François Bélanger explores what Channel 1 is promising and why some are concerned about what it could mean for the news industry."
"But as a growing body of research shows, these electronic systems are not perfect.
Our new study shows how often these technology-related errors occur and what they mean for patient safety. They often stem from programming errors or poor design and have less to do with the health workers using the system."
"In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. "It makes my writing look fancy," one PhD student protested when I pointed to weaknesses in AI-revised text.
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of "duplicative language" are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing."
"It seems pointless to ask whether the spreadsheet is a good or a bad thing. But one prominent contrarian, the technology columnist John C Dvorak, had no doubts last week as he contemplated VisiCalc's 30th anniversary.
'The spreadsheet', he fumed, 'created the "what if" society. Instead of moving forward and progressing normally, the what-if society questions each and every move we make. It second-guesses everything'. Worse still, he thinks, the spreadsheet has elevated the once-lowly bean-counter to the board and enabled accountants to run the world."
"Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
"That's why the best systems may come from a combination of AI and human work; we can play to the machine's strengths, Ilievski says. But when we want to compare AI and the human mind, it's important to remember "there is no conclusive research providing evidence that humans and machines approach puzzles in a similar vein", he says. In other words, understanding AI may not give us any direct insight into the mind, or vice versa."
"As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI's benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we'll run out of things to do (even if they don't look like "real jobs" to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.
Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
"On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture).
The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions - the agricultural, the industrial, and the computational - we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.
The technological progress we make in the next 100 years will be far larger than all we've made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear."
"One of the key features introduced is a "Correction" capability in Azure AI Content Safety. This tool aims to address the problem of AI hallucinations - instances where AI models generate false or misleading information. "When we detect there's a mismatch between the grounding context and the response… we give that information back to the AI system," Bird explained. "With that additional information, it's usually able to do better the second try.""
"To find more glyphs, researchers led by archaeologist Masato Sakai of Yamagata University trained an AI program to identify relief-type glyphs in high-resolution drone images taken of the entire region. The program identified 1309 possible geoglyphs, and the team confirmed 303 of them with on-the-ground surveys, almost doubling the number of known geoglyphs of this type, the researchers report today in the Proceedings of the National Academy of Sciences."