
Home / Digital Society / Group items tagged generative


dr tech

EU agrees 'historic' deal with world's first laws to regulate AI | European Union | The... - 0 views

  •  
    "The European Parliament secured a ban on use of real-time surveillance and biometric technologies including emotional recognition but with three exceptions, according to Breton. It would mean police would be able to use the invasive technologies only in the event of an unexpected threat of a terrorist attack, the need to search for victims and in the prosecution of serious crime."
dr tech

Google's AI stoplight program is now calming traffic in a dozen cities worldwide - 0 views

  •  
    "Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
dr tech

Generative AI like Midjourney creates images full of stereotypes - Rest of World - 0 views

  •  
    ""Essentially what this is doing is flattening descriptions of, say, 'an Indian person' or 'a Nigerian house' into particular stereotypes which could be viewed in a negative light," Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity. Midjourney did not respond to multiple requests for an interview or comment for this story."
dr tech

Yepic fail: This startup promised not to make deepfakes without consent, but did anyway... - 1 views

  •  
    "U.K.-based startup Yepic AI claims to use "deepfakes for good" and promises to "never reenact someone without their consent." But the company did exactly what it claimed it never would. In an unsolicited email pitch to a TechCrunch reporter, a representative for Yepic AI shared two "deepfaked" videos of the reporter, who had not given consent to having their likeness reproduced. Yepic AI said in the pitch email that it "used a publicly available photo" of the reporter to produce two deepfaked videos of them speaking in different languages. The reporter requested that Yepic AI delete the deepfaked videos it created without permission."
dr tech

Microsoft offers politicians protection against deepfakes - The Verge - 0 views

  •  
    "Microsoft will also launch Content Credentials for digital watermarking, create teams to work with political campaigns on cybersecurity and AI, and endorse a bill banning AI in political ads."
dr tech

AI suggested 40,000 new possible chemical weapons in just six hours - The Verge - 0 views

  •  
    "Researchers put AI normally used to search for helpful drugs into a kind of "bad actor" mode to show how easily it could be abused at a biological arms control conference. All the researchers had to do was tweak their methodology to seek out, rather than weed out toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence."
dr tech

The latest marketing tactic on LinkedIn: AI-generated faces : NPR - 0 views

  •  
    ""The face jumped out at me as being fake," said DiResta, a veteran researcher who has studied Russian disinformation campaigns and anti-vaccine conspiracies. To her trained eye, these anomalies were red flags that Ramsey's photo had likely been created by artificial intelligence."
dr tech

Artificial Disinformation: Can Chatbots Destroy Trust on the Internet? | by Nabil Aloua... - 0 views

  •  
    ""If these systems aren't used to create propaganda and misinformation yet, I don't know what certain governments are doing with their time," ex-Google engineer Blake Lemoine said. "We're letting the engineering get ahead of the science. We're building a thing that we literally don't understand.""
dr tech

OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools wo... - 0 views

  •  
    "In early December, Musk called ChatGPT "scary good" and warned, "We are not far from dangerously strong AI." But Altman has been warning the public just as much, if not more, even as he presses ahead with OpenAI's work. Last month, he worried about "how people of the future will view us" in a series of tweets. "We also need enough time for our institutions to figure out what to do," he wrote. "Regulation will be critical and will take time to figure out…having time to understand what's happening, how people want to use these tools, and how society can co-evolve is critical.""
dr tech

Game Over for Maths A-level - Conrad Wolfram - 0 views

  •  
    "The combination of ChatGPT with its Wolfram plug-in just scored 96% in a UK Maths A-level paper, the exam taken at the end of school, as a crucial metric for university entrance. (That compares to 43% for ChatGPT alone). If this doesn't shock you, it should. Maths A-level (like its equivalent in many other countries) is held up as the required and essential qualification for much of our populations-the way to be prepared for our upcoming AI age. And yet, here it is, done by those very AIs, better than most of our students."
dr tech

Italy curbs ChatGPT, starts probe over privacy concerns - 0 views

  •  
    "OpenAI has taken ChatGPT offline in Italy after the government's Data Protection Authority on Friday temporarily banned the chatbot and launched a probe over the artificial intelligence application's suspected breach of privacy rules."
dr tech

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says - 0 views

  •  
    "A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app's chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. "
dr tech

The Expanding Dark Forest and Generative AI - 0 views

  •  
    "The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing "content creators," and algorithmically manipulated junk. It's like a dark forest that seems eerily devoid of human life - all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators."
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Wikipedia is facing an existential crisis. Can gen Z save it? | Stephen Harrison | The ... - 0 views

  •  
    "The world's most important knowledge platform needs young editors to rescue it from chatbots - and its own tired practices"
dr tech

'The first TikTok election': are Sunak and Starmer's digital campaigns winning over vot... - 0 views

  •  
    "Security fears about Chinese influence over Bytedance, TikTok's owner, are undoubtedly part of the reason why UK politicians have been reluctant to get involved, and the political context is also different - Biden is reacting to Donald Trump's social media clout - but US strategists such as Teddy Goff have suggested that building up an army of TikTokers who can share and amplify political messages is vital."
dr tech

Will the future of transportation be robotaxis - or your own self-driving car? | Techn... - 0 views

  •  
    "Tenant-screening systems like SafeRent are often used in place of humans as a way to 'avoid engaging' directly with the applicants and pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company. The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted. Louis and the other named plaintiff alleged SafeRent's algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. SafeRent has settled. In addition to making a $2.3m payment, the company has agreed to stop using a scoring system or make any kind of recommendation when it comes to prospective tenants who used housing vouchers for five years."