"It comes as a Guardian investigation reveals the human stories behind scams that originate on Meta's platforms, with a nationwide estimate released this week predicting the tech firm's failure to stamp out fraud will cost UK households £250m during 2023.
With someone in the UK said to fall victim to a purchase scam starting on either Facebook or Instagram every seven minutes, the Guardian asked people who had been defrauded on these sites, as well as on Meta's WhatsApp platform, to get in touch.
One Facebook user told us she was defrauded of her life savings and pulled into debt, losing a total of £70,000, after being duped by an investment scam. While some people lost large amounts of money, a stream of unsuspecting online shoppers reported being conned out of smaller amounts when they placed orders with bogus online shops advertised on Facebook and Instagram."
"In this future, teachers assisted by LLMs with marking and lesson planning would gain much-needed time to focus on other elements of their work. However, in a bid to cut costs, the "teaching" of lessons could also be delegated to machines, robbing teachers and students of human interaction.
"Of course, that will be for the less well-off students," Luckin says. "The more well-off students will still have lots of lovely one-to-one human interactions, alongside some very smartly integrated AI."
Luckin instead advocates a future in which technology eases teachers' workloads but does not disrupt their pastoral care - or disproportionately affect students in poorer areas. "That human interaction is something to be cherished, not thrown out," she says."
"Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice.
Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes.
The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight.
Researchers believe that the AI model will drastically lower the cost of diagnosing type 2 diabetes."
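The pipeline described above, pairing subtle acoustic features with basic health data, can be loosely illustrated in code. Note the caveats: this is not the researchers' actual model; the study's 14 acoustic features, trained weights, and model architecture are not reproduced here, so every feature name and coefficient below is an invented placeholder used only to show the general shape of such a classifier.

```python
import math

def diabetes_risk_score(pitch_shift, intensity_var, age, sex,
                        height_cm, weight_kg):
    """Illustrative sketch: fuse voice features with demographics.

    pitch_shift, intensity_var -- hypothetical stand-ins for the
    study's acoustic features; age/sex/height/weight mirror the
    basic health data the researchers say they paired with them.
    Returns a probability-like score in (0, 1).
    """
    # Derive BMI from height and weight, as clinical models often do.
    bmi = weight_kg / (height_cm / 100) ** 2

    # Roughly normalise features to comparable scales.
    features = [pitch_shift, intensity_var, age / 100, sex, bmi / 40]

    # Placeholder coefficients -- NOT the study's trained weights.
    weights = [0.8, 0.6, 0.5, 0.3, 0.9]
    bias = -1.5

    # Standard logistic regression scoring: sigmoid of a weighted sum.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))
```

In a real system the coefficients would be learned from labelled voice recordings, and the acoustic features would be extracted by signal-processing code rather than passed in by hand; this sketch only shows how vocal and demographic signals can be combined into a single score.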
"The January snow lay thick on the Moscow ground, as masked officers of the FSB - Russia's fearsome security agency - prepared to smash down the doors at one of 25 addresses they would raid that day.
Their target was REvil, a shadowy conclave of hackers that claimed to have stolen more than $100m (£74m) a year through "ransomware" attacks, before suddenly disappearing.
As group members were led away in cuffs, FSB officers gathered crypto-wallets containing untold volumes of digital currency such as bitcoin. Others used money-counting machines to tot up dozens of stacks of hundred-dollar bills."
"My prediction failed. For a decade and a half, Facebook resisted the fate of all the social networks that preceded it. In hindsight, it's easy to see why: it cheated. The company used investor cash to buy and neutralize competitors ("Kids are leaving Facebook for Insta? Fine, we'll buy Insta. We know you value choice!"). It allegedly spied on users through the deceptive use of apps such as Onavo and exploited the intelligence to defeat rivals. More than anything, it ratcheted up "switching costs.""
"Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency.
Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review them.
In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business customers will buy our products so their products will cost more to make, but will be of higher quality.""
"The tools - which aim to cut the time and cost of filtering mountains of job applications and drive workplace efficiency - are enticing to employers. But Schellmann concludes they are doing more harm than good. Not only are many of the hiring tools based on troubling pseudoscience (for example, the idea that the intonation of our voice can predict how successful we will be in a job doesn't stand up, says Schellmann), they can also discriminate."
"AI-generated voices are extremely convincing and can converse without the obvious repetitions and scripted lines, making them a dream tool for fraudsters, marketers and political campaigns. The Federal Communications Commission sees the costs of all this coming and is banning their use in robocalls.
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters," said FCC Chairwoman Jessica Rosenworcel in a press release. "State attorneys general will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.""
"But there is another set of Facebook stories that shines an even more glaring light on the company's mismatch of power and responsibility. A good place to start is Sri Lanka: one of many countries where "fake news" is not the slightly jokey notion regularly played up by Trump, but sometimes a matter of life and death."
"An array of free website-building tools, many offered by ad-tech and ad-funded companies, has led to a dizzying number of trackers loading on users' browsers, even when they visit sites where privacy would seem paramount, an investigation by The Markup has found. Some load without the website operators' explicit knowledge - or disclosure to users."
"Many of the challenges we face today - from conflicts to climate chaos and the cost-of-living crisis - are the result of what is a male-dominated world with a male-dominated culture, taking the key decisions that guide our world."
"Policymakers must create - and in some circumstances must reinforce to create - transformative change by promoting women and girls' equal rights and opportunities to learn; by dismantling barriers and smashing glass ceilings," he said.
The digital divide and the "glass ceiling" both exacerbate gender inequality and prevent women from achieving their full potential. Addressing these problems may require a multi-pronged strategy: policy changes to advance gender equality, financial support for education and training, and transformative initiatives to dismantle the prejudice and stereotypes that sustain gender inequality.