"That's why the best systems may come from a combination of AI and human work; we can play to the machine's strengths, Ilievski says. But when we want to compare AI and the human mind, it's important to remember "there is no conclusive research providing evidence that humans and machines approach puzzles in a similar vein", he says. In other words, understanding AI may not give us any direct insight into the mind, or vice versa."
"Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
"It seems pointless to ask whether the spreadsheet is a good or a bad thing. But one prominent contrarian, the technology columnist John C Dvorak, had no doubts last week as he contemplated VisiCalc's 30th anniversary.
'The spreadsheet', he fumed, 'created the "what if" society. Instead of moving forward and progressing normally, we have become a what-if society that questions each and every move we make. It second-guesses everything'. Worse still, he thinks, the spreadsheet has elevated the once-lowly bean-counter to the board and enabled accountants to run the world."
"In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. "It makes my writing look fancy," one PhD student protested when I pointed to weaknesses in AI-revised text.
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of "duplicative language" are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing."
"But as a growing body of research shows, these electronic systems are not perfect.
Our new study shows how often these technology-related errors occur and what they mean for patient safety. They often stem from programming errors or poor design and have less to do with the health workers using the system."
"A 24-hour news channel startup based in southern California comes with a twist: all of the reporters and production are AI-generated. CBC's Jean-François Bélanger explores what Channel 1 is promising and why some are concerned about what it could mean for the news industry."
"That is, in a way, Orion's most powerful and dangerous feature: they're so normal that people will want to wear them; they're so normal that people won't notice them.
On the other hand, Meta has created a new gadget that, like every other before it, can be enhanced or modified for other purposes, for better or worse. Should Meta stop building tech because a small number of people will use it for evil? If they had served this use case on a silver platter then yes, they should be held accountable. They didn't. Sure, Zuck's no friend, but he's not the one intruding on your privacy."
"TikTok quantified the precise amount of viewing it takes for someone to form a habit: 260 videos.
Kentucky authorities note that while it might seem like a lot, TikTok videos can be just a few seconds long.
"Thus, in under 35 minutes, an average user is likely to become addicted to the platform," the state investigators concluded."
"In addition to their usual job of grubbing out bad human edits, they're having to spend an increasing amount of their time trying to weed out AI filler.
404 Media has talked to Ilyas Lebleu, an editor at the crowdsourced encyclopedia who was involved in founding the "WikiProject AI Cleanup" project. The group is trying to come up with best practices to detect machine-generated contributions. (And no, before you ask, AI is useless for this.)"
"Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" - an AI system named after the German philosopher Jürgen Habermas.
The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected."
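The selection step described above can be sketched in a few lines. This is a minimal toy illustration, not the DeepMind system: the statements and the 1-7 ratings are invented, and the real Habermas Machine also feeds ratings back as training signal, which is not modelled here. "Greatest endorsement" is taken to mean highest mean rating, an assumption:

```python
# Toy sketch: members rate candidate group statements; the statement
# with the highest mean rating (greatest endorsement) is selected.
statements = ["Statement A", "Statement B", "Statement C"]
ratings = {                      # one rating per member, 1-7 scale (invented)
    "Statement A": [5, 6, 4, 5],
    "Statement B": [7, 3, 6, 2],
    "Statement C": [6, 6, 5, 6],
}

def endorsement(statement: str) -> float:
    scores = ratings[statement]
    return sum(scores) / len(scores)

winner = max(statements, key=endorsement)
print(winner)  # "Statement C": mean 5.75 beats A's 5.0 and B's 4.5
```

Note how a mean-rating rule favours broadly acceptable statements (C, rated 5-6 by everyone) over polarising ones (B, rated 7 by some and 2 by others), which matches the stated goal of statements "designed to be acceptable to all".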
""I'm in shock, there are no words right now. I've been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable," said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.
"I don't want anyone viewing me like that. Just the fact that my image is out there, could be saying anything - promoting military rule in a country I did not know existed. People will think I am involved in the coup," Torres added after being shown the video by the Guardian for the first time."
"First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the "10%" expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI's are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach "a country of geniuses in a datacenter"."
"The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."
"But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
"AI may displace 3m jobs but long-term losses 'relatively modest', says Tony Blair's thinktank
Rise in unemployment in low hundreds of thousands as technology creates roles, Tony Blair Institute suggests"