"Our reliance on their information is now a matter of life and death. Misinformation online has real-world consequences. In the developing world, a hoax claimed India had banned coronavirus social posts, and in the US, 13 percent of people thought coronavirus was a hoax during the critical weeks when earlier notification to shelter in place would have saved thousands of lives. Tech platforms have the ability, through their persuasive techniques and microtargeting, to influence the behavior of society in ways traditional media can't. "
"Tristan Harris presents on 1) why humans as a species are vulnerable to technology, 2) why it's so hard to solve the issues of social media algorithms, artificial intelligence, and exponential tech, and 3) what it will take to come together to avoid these existential threats."
"What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."
"Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.
Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users."
"His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore."
He is also worried that AI technologies will in time upend the job market. Today, chatbots such as ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that."
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons - those killer robots - become reality."
"Campaigners fear the face-scanning technology could be used against protesters, and say police have done so before.
The Met insisted the technology would not be used to quell lawful protest or target activists. But campaign groups do not believe them. Britain's biggest force said: "It is not used to identify people who are linked to, or have been convicted of, being involved in protest activity."
A leading academic expert said the number of people whose faces would be scanned would make it the largest deployment yet of live facial recognition (LFR) in the UK."
"The more technology helps make us more efficient, the more we are asked to be more efficient. We - our labour, our time, our data - are mined with increasing rapaciousness.
Here's my thing with that Keynes essay. Sure, it looks like he was totally wrong about the future. We didn't end up with so much free time that we all went insane. But, then again, we've never actually tested his theory properly. We never just let the machines take over. Clearly, as we're (re)discovering, everyone finds that idea terrifying. I tend to agree. The idea of a completely A.I.-controlled world makes me uneasy. That said, the trend over the last 100 years - and even more since the dawn of this century - doesn't make me feel much better.
What seems likelier to me than us all losing our jobs to A.I. is that the way in which we're already being replaced by machines continues and accelerates. That is, that we become ever more tied to the machines, ever more entwined with them. That our lives, bodies, and brains will become ever more machine-like."
"Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.
To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short."
"The European Parliament secured a ban on use of real-time surveillance and biometric technologies including emotional recognition but with three exceptions, according to Breton.
It would mean police would be able to use the invasive technologies only in the event of an unexpected threat of a terrorist attack, when searching for victims, or in the prosecution of serious crime."
""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals," the document warns, adding that "we may not be able to keep them in check".
Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."
"However, Ms Bower also noted Woolworths' AI technology is considerably less invasive than technology recently trialled and abandoned by Bunnings and Kmart. "The Woolworths cameras don't collect sensitive biometric data or any personal information," she said. "Woolworths has also taken steps to keep customers informed using a combination of in-store signage and public statements. Importantly, customers can opt-out by using the traditional checkout process. These are all consumer protections Bunnings and Kmart failed to implement.""
"But it feels less creepy once you learn that these technologies don't have to rely on a camera to see where you are and what you're doing. Instead, they use radar. Google's Advanced Technology and Products division, better known as ATAP, the department behind oddball projects such as a touch-sensitive denim jacket, has spent the past year exploring how computers can use radar to understand our needs or intentions and then react to us appropriately."
"In combining these factors, we arrive at a civilization built upon a technological infrastructure that we fundamentally cannot understand. The same systems that promise us technological emancipation put the whole of society at risk. I vaguely recall a wise man once saying that only the fool builds his house upon sand. And so, how can a society maintain itself if the stones of its foundation are black boxes? Before we answer this question, let's examine the current state of affairs."
"The public is often supportive of the use of such tech: 59% of U.K. adults told a survey they "somewhat" or "strongly" support police use of facial recognition technology in public spaces, and a Pew Research study found 46% of U.S. adults said they thought it was a good idea for society. In China, one study found that 51% of respondents approved of facial recognition tech in the public sphere, while in India, 69% of people said in a 2023 report that they supported its use by the police.
But while authorities generally pitch facial recognition as a tool to capture terrorists or wanted murderers, the technology has also emerged as a critical instrument in a very particular context: punishing protesters. "
"Claims that artificial intelligence will help solve the climate crisis are misguided, with the technology instead likely to cause rising energy use and turbocharge the spread of climate disinformation, a coalition of environmental groups has warned.
Advances in AI have been touted by big tech companies and the United Nations as a way to help ameliorate global heating, via tools that help track deforestation, identify pollution leaks and track extreme weather events. AI is already being used to predict droughts in Africa and to measure changes to melting icebergs."
"The problem extends beyond the Pegasus project. Installed in Mexico City is one of the largest urban surveillance systems in the Americas: El Centro de Comando, Control, Cómputo, Comunicaciones y Contacto Ciudadano, better known as El C5. The network, connected to panic buttons and command centers, is spread over 1,485 kilometers with software designed to automatically detect license plates. On top of that, the number of installed cameras grew from 18 million to 65 million between 2018 and 2022, with stated plans to add at least 16 million more. Despite its apparent pre-eminence, issues have arisen with the C5, from false identifications to mishandling of personal data. Technological malfunctions have also been shown to impact the outcomes of criminal cases because of the assumption of objectivity that video surveillance supposedly confers.
The sprawling C5 system is dwarfed, both in scale and in threat to civil liberties, only by the Titan, an expansive intelligence and security database. The software is used by several Mexican state governments to combine location data with other private information, including financial, government, and telecom data, to geolocate individuals across the country in real time. Governmental officials have been criticized for the controversial use of the database to target public figures, but, more problematically, access to Titan-enabled intel can be gained through an underground market, making it a further liability.
The extent to which artificial intelligence has been incorporated into the C5 and Titan is still not clear, but the specter of surveillance looms large and is set to cause more worries with the addition of new smart technologies."
"In a sharply worded warning, the cancer experts say that 'novel solutions' such as new diagnostic tests have been wrongly hyped as 'magic bullets' for the cancer crisis, but 'none address the fundamental issues of cancer as a systems problem'.
A 'common fallacy' of NHS leaders is the assumption that new technologies can reverse inequalities, the authors add. The reality is that tools such as AI can create 'additional barriers for those with poor digital or health literacy'.
'We caution against technocentric approaches without robust evaluation from an equity perspective,' the paper concludes."
"An NHS hospital in west London is pioneering the use of Artificial Intelligence (AI) to help check for skin cancer.
Chelsea and Westminster Hospital said its AI technology has been approved to give patients the all-clear without having to see a doctor.
Once photos are uploaded to the system, the technology analyses and interprets the images, with 99% accuracy in diagnosing benign cases, the hospital said.
Thousands of NHS patients have had urgent cancer checks using the AI tool, freeing up consultants to focus on the most serious cases and bringing down waiting lists.
The system conducts the checks in minutes, with medical photographers taking photos of suspicious moles and lesions using an iPhone and the DERM app, developed by UK firm Skin Analytics."