"The tablets are scheduled to be introduced next year, and by 2028, teachers are supposed to be using these AI textbooks for all subjects except music, art, physical education and ethics. The government hasn't shared many details about how it will all work, except that the material is supposed to be customized for different speeds of learning, with teachers using dashboards to monitor how students are doing.
In response, more than 50,000 parents have signed a petition demanding that the government focus less on new tech and more on students' overall well-being: "We, as parents, are already encountering many issues at unprecedented levels arising from [our children's] exposure to digital devices."
Lee Sun-youn, a mother of two, told FT, "I am worried that too much usage of digital devices could negatively affect their brain development, concentration span and ability to solve problems - they already use smartphones and tablets too much.""
"So we've entered a world in which the CEOs of major social networks are arrested and detained. That's quite a shift - and it didn't come in a way anyone was expecting."
"Aussies bothered by their bosses at home can now rest assured they cannot be punished for ignoring such after-hours demands. Lawmakers there passed a "right to disconnect" bill designed to end the creep of work into home life - and an explosion of unpaid overtime."
"Students using ChatGPT solved 48% more of the problems correctly, and those with the AI tutor solved 127% more problems correctly, according to the report.
But their peers who did not use ChatGPT outscored them on the related tests. In fact, students using ChatGPT scored 17% worse on tests.
Kids working on their own performed the same on practice assignments and tests.
Researchers told The Hechinger Report that students are using the chatbot as a "crutch" and that it can "substantially inhibit learning.""
"Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?
Emissions from in-house data centers of Google, Microsoft, Meta and Apple may be 7.62 times higher than official tally"
"Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn't have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning. "
"National police agency says it is investigating 513 cases of deepfake pornography as a new scandal grips the country
Raphael Rashid in Seoul and Justin McCurry in Tokyo
Fri 13 Sep 2024 21.00 BST
The anger was palpable. For the second time in just a few years, South Korean women took to the streets of Seoul to demand an end to sexual abuse. When the country spearheaded Asia's #MeToo movement, the culprit was molka - spy cams used to record women without their knowledge. Now their fury was directed at an epidemic of deepfake pornography."
"The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."
""I'm in shock, there are no words right now. I've been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable," said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.
"I don't want anyone viewing me like that. Just the fact that my image is out there, could be saying anything - promoting military rule in a country I did not know existed. People will think I am involved in the coup," Torres added after being shown the video by the Guardian for the first time."
"In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. "It makes my writing look fancy," one PhD student protested when I pointed to weaknesses in AI-revised text.
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of "duplicative language" are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing."
"It seems pointless to ask whether the spreadsheet is a good or a bad thing. But one prominent contrarian, the technology columnist John C Dvorak, had no doubts last week as he contemplated VisiCalc's 30th anniversary.
'The spreadsheet', he fumed, 'created the "what if" society. Instead of moving forward and progressing normally, the what-if society questions each and every move we make. It second-guesses everything'. Worse still, he thinks, the spreadsheet has elevated the once-lowly bean-counter to the board and enabled accountants to run the world."
"First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the "10%" expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AIs are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach "a country of geniuses in a datacenter"."
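The comparative-advantage logic in the excerpt above can be made concrete with a toy calculation (all productivities, task names and numbers here are hypothetical illustrations, not figures from the essay): even when AI is absolutely better at every task, the agent with the smaller opportunity cost should do the task.

```python
# Hypothetical productivities (units of output per hour of work).
# The AI is absolutely better at BOTH tasks here.
productivity = {
    "ai":    {"drafting": 100.0, "physical": 2.0},
    "human": {"drafting": 1.0,   "physical": 1.0},
}

def opportunity_cost(agent: str, task: str, other: str) -> float:
    """Units of `other` output forgone per unit of `task` produced by `agent`."""
    p = productivity[agent]
    return p[other] / p[task]

# The AI gives up 50 drafting-units per unit of physical work;
# the human gives up only 1. So the human holds the comparative
# advantage in physical work despite being worse at it absolutely,
# and total output is higher if the human specialises there.
ai_cost = opportunity_cost("ai", "physical", "drafting")       # 50.0
human_cost = opportunity_cost("human", "physical", "drafting") # 1.0
assert human_cost < ai_cost
```

This is the same structure as the essay's claim about the physical world: relative, not absolute, advantage is what keeps human work economically sensible.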
"Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" - an AI system named after the German philosopher Jürgen Habermas.
The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected."
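The selection step described above, picking the statement with the greatest endorsement, is at its core an argmax over member ratings. A minimal sketch of that step only (the function name, data shapes and example ratings are assumptions for illustration, not the published DeepMind implementation):

```python
from statistics import mean

def select_statement(candidates: dict[str, list[float]]) -> str:
    """Pick the candidate group statement with the highest mean rating.

    `candidates` maps each generated statement to the ratings given
    by group members.
    """
    return max(candidates, key=lambda s: mean(candidates[s]))

# Example: three members rate two candidate statements.
ratings = {
    "Statement A": [4, 5, 3],   # mean 4.0
    "Statement B": [5, 5, 4],   # mean ~4.67
}
print(select_statement(ratings))  # Statement B
```

In the reported system these ratings also feed back as training signal; this sketch covers only the endorsement-based selection.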
"In addition to their usual job of grubbing out bad human edits, they're having to spend an increasing amount of their time trying to weed out AI filler.
404 Media has talked to Ilyas Lebleu, an editor at the crowdsourced encyclopedia who was involved in founding the "WikiProject AI Cleanup" project. The group is trying to come up with best practices to detect machine-generated contributions. (And no, before you ask, AI is useless for this.)"
"AI may displace 3m jobs but long-term losses 'relatively modest', says Tony Blair's thinktank
Rise in unemployment in low hundreds of thousands as technology creates roles, Tony Blair Institute suggests"
"But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
"The breakdown in negotiations resulted in Meta blocking all news sources on Facebook in Canada "recklessly and dangerously" as all 10 provinces and three territories in the country burned, Canada's heritage minister, Pascale St-Onge, told Guardian Australia.
"Facebook is leaving disinformation and misinformation to spread on their platform, while choosing to block access to reliable, high-quality, independent journalism," St-Onge said.
"Facebook is just leaving more room for misinformation during need-to-know situations like wildfires, emergencies, local elections and other critical times for people to make decisions on matters that affect them.""
"The climate crisis could prove AI's greatest challenge. While Google publicises AI-driven advances in flooding, wildfire and heatwave forecasts, like many big tech companies, it uses more energy than many countries. Today's large models are a major culprit. It can take 10 gigawatt-hours of electricity to train a single large language model like OpenAI's ChatGPT, enough to supply 1,000 US homes for a year."
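The closing comparison checks out with back-of-the-envelope arithmetic, assuming roughly 10,000 kWh of electricity per US home per year (approximately the reported national average; the figure is an assumption, not from the article):

```python
# Sanity-check: 10 GWh of training energy vs. annual household use.
# Assumes ~10,000 kWh/year per US home (approximate national average).
training_energy_kwh = 10e6   # 10 gigawatt-hours expressed in kWh
home_annual_kwh = 10_000     # assumed average US household consumption

homes_for_a_year = training_energy_kwh / home_annual_kwh
print(homes_for_a_year)  # 1000.0
```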