"A problem arises, though, when that content misleads us. When a purported "symptom" of anxiety is, actually, just a universal, everyday experience. When the information is flawed, or the people providing it are ill-informed. When viewers, many of whom are children and teens, don't realize that a TikTok diagnosis cannot replace treatment by a professional."
"The combination of ChatGPT with its Wolfram plug-in just scored 96% in a UK Maths A-level paper, the exam taken at the end of school, as a crucial metric for university entrance. (That compares to 43% for ChatGPT alone).
If this doesn't shock you, it should. Maths A-level (like its equivalent in many other countries) is held up as the required and essential qualification for much of our populations, the way to be prepared for our upcoming AI age. And yet, here it is, done by those very AIs, better than most of our students."
"The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
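Smerdon's description of next-word guessing can be illustrated with a toy sketch. This is not how a real large language model works internally (those use neural networks trained on vast corpora); it is a minimal bigram frequency counter, with an invented three-sentence corpus and hypothetical function names, that shows the same basic idea: given a word, pick the statistically most likely word to come next.

```python
# Toy illustration of "guess the most likely next word".
# All corpus text and function names here are invented for the example;
# a real LLM uses a neural network, not raw bigram counts.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the most frequent next word after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat sat down",
    "the cat ate the fish",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "cat"))  # "sat" (follows "cat" in 2 of 3 sentences)
```

Because the model only ever optimises for plausibility, not truth, the same mechanism that completes "the cat" with "sat" will happily complete a citation with an author and journal that merely *look* likely, which is why fabricated academic references emerge.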
"But for many scientists, Twitter has become an essential tool for collaboration and discovery - a source of real-time conversations around research papers, conference talks and wider topics in academia. Papers now zip around scientific communities faster thanks to Twitter, says Johann Unger, a linguist at Lancaster University, UK, who notes that extra information is also shared in direct private messages through the site. And its limit on tweet length - currently 280 characters - has pushed academics into keeping their commentary pithy, he adds."
"The music industry is urging streaming platforms not to let artificial intelligence use copyrighted songs for training, in the latest of a run of arguments over intellectual property that threaten to derail the generative AI sector's explosive growth.
In a letter to streamers including Spotify and Apple Music, the record label Universal Music Group expressed fears that AI labs would scrape millions of tracks to use as training data for their models and copycat versions of pop stars."
"Researchers at the Citizen Lab at the University of Toronto's Munk School said the spyware, which is made by an Israeli company called QuaDream, infected some victims' phones by sending an iCloud calendar invitation to mobile users from operators of the spyware, who are likely to be government clients. Victims were not notified of the calendar invitations because they were sent for events logged in the past, making them invisible to the targets of the hacking. Such attacks are known as "zero-click" because users of the mobile phone do not have to click on any malicious link or take any action in order to be infected."
""Bots view, 'like,' subscribe and repost content and manipulate view counts to move content up in search results and recommendation lists," the analysis said. In some cases, Fabrika targets users with disinformation directly after gleaning their emails and phone numbers from databases. The campaign's goals include demoralising Ukrainians and exploiting divisions among western states, the document added.
Experts have downplayed the 1% claim. Alan Woodward, a professor of cybersecurity at Surrey University, said the figure sounded implausible and that sock puppet accounts - a term for accounts with fake identities - need their content to be reposted by plausible accounts such as those operated by influencers."
"A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.
That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study.
"If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else."
She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
"Other far more sinister real-world effects of algorithms are well documented. In the US, pedestrians have been mowed down by robotaxis; prisoners denied bail on the advice, in part, of software; in Australia, welfare recipients incorrectly and illegally hounded by an algorithmic debt collector that came to be known as robodebt. In the UK, students took to the streets in 2020 after being denied places at universities by the calculations of digital minions - their chants of "fuck the algorithm" proving a "defining moment" for Kowalkiewicz and an inspiration for his book."
"The research has found many children do not even recognise these promotions, known as content marketing, as advertising. It warns that this may lead to children following betting companies on social media, making it more likely that they sign up with them when they turn 18 and can legally gamble. Dr Raffaello Rossi, lecturer in marketing at Bristol University, one of the report's authors, said content marketing was particularly popular with young people."
"If you were doubting how important recommender systems are to social media companies, a lawsuit filed last week against Meta makes it crystal clear. At the heart of this legal battle is a fundamental question: Shouldn't users have the power to decide what they do and don't see online?
The lawsuit, filed by the Knight First Amendment Institute at Columbia University on behalf of Professor Ethan Zuckerman, directly challenges how social media feeds are curated. Professor Zuckerman's proposed browser extension, 'Unfollow Everything 2.0,' would enable Facebook users to disengage from the algorithmically driven content that dominates their feeds, by allowing them to unfollow friends, pages and groups en masse, thus resetting their digital interactions on their terms."
"Big data and artificial intelligence are some of today's most popular buzzwords. Both are promised to help deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analyses, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. And the common view is that these developments will have a positive impact on society."
"In the meantime, I actually like how most of these islands represent an attempt by education institutions to embrace the weirdness of the web. The current crop of education startups seem bland and antiseptic in comparison to these virtual worlds. I can't take a Coursera class on a pirate ship, or attend office hours in front of an edX campfire.
And honestly, that's probably a good thing. But it makes the web slightly less interesting."
Sounds a bit extreme just to make sure no one can log on to your laptop or smartphone, but a team of researchers from Stanford and Northwestern universities as well as SRI International is nonetheless experimenting at the computer-, cognitive- and neuroscience intersection to combat identity theft and shore up cyber security, by taking advantage of the human brain's innate abilities to learn and recognize patterns.
"Prolific digital rights activism organization Fight for the Future has partnered with the group Students for Sensible Drug Policy to stop facial recognition technology from coming to college campuses."
"Ah, but isn't creativity a slippery concept - something that's hard to define but that we nevertheless recognise when we see it? That hasn't stopped psychologists from trying to measure it, though, via tools such as the alternative uses test and the similar Torrance test. And it turns out that one LLM - GPT-4 - beats 91% of humans on the former and 99% of them on the latter. So as the inveterate artificial intelligence user Ethan Mollick puts it: "We are running out of creativity tests that AIs cannot ace.""
"No matter how much our computers assure us they're backing everything up to a hard drive in the sky, memory failure remains a hardwired part of our lives. Writers reflect on when a digital loss created an emotional hole - from the college essay that disappeared minutes before the due date to an iPhone update that lost years of photographs."