Digital Society / Group items matching "models" in title, tags, annotations or url
dr tech

AI now surpasses humans in almost all performance benchmarks - 0 views

  •  
    "The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, 'struggled' here might be misleading; it certainly doesn't mean AI did badly. Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. "
dr tech

Nvidia: what's so good about the tech firm's new AI superchip? | Technology sector | The Guardian - 0 views

  •  
    "Training a massive AI model, the size of GPT-4, would currently take about 8,000 H100 chips, and 15 megawatts of power, Nvidia said - enough to power about 30,000 typical British homes."
dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

Google pauses AI-generated images of people after ethnicity criticism | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "Google has put a temporary block on its new artificial intelligence model producing images of people after it portrayed German second world war soldiers and Vikings as people of colour. The tech company said it would stop its Gemini model generating images of people after social media users posted examples of images generated by the tool that depicted some historical figures - including popes and the founding fathers of the US - in a variety of ethnicities and genders."
dr tech

Seized ransomware network LockBit rewired to expose hackers to world | Cybercrime | The Guardian - 0 views

  •  
    "The organisation is a pioneer of the "ransomware as a service" model, whereby it outsources the target selection and attacks to a network of semi-independent "affiliates", providing them with the tools and infrastructure and taking a commission on the ransoms in return. As well as ransomware, which typically works by encrypting data on infected machines and demanding a payment for providing the decryption key, LockBit copied stolen data and threatened to publish it if the fee was not paid, promising to delete the copies on receipt of a ransom."
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Making an image with generative AI uses as much energy as charging your phone - 0 views

  •  
    "In fact, generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. However, they found that using an AI model to generate text is significantly less energy-intensive. Creating text 1,000 times only uses as much energy as 16% of a full smartphone charge. "
dr tech

This company is building AI for African languages | MIT Technology Review - 0 views

  •  
    "Abbott's experience mirrors the situation faced by Africans who don't speak English. Many language models like ChatGPT do not perform well for languages with smaller numbers of speakers, especially African ones. But a new venture called Lelapa AI, a collaboration between Abbott and a biomedical engineer named Pelonomi Moiloa, is trying to use machine learning to create tools that specifically work for Africans."
dr tech

Can AI Fairly Decide Who Gets an Organ Transplant? - 0 views

  •  
    "Can AI and analytics be used in a way that improves operational efficiency without jeopardizing our ethical principles? The answer is "yes" - if moral objectives and constraints, now often treated as an afterthought, are considered from the outset when designing models. We will discuss a recent attempt to combine ethics, analytics, and operational efficiency in the world of organ allocation and examine the lessons it holds for other areas of health care and beyond."
dr tech

How digital twins may enable personalised health treatment | Medical research | The Guardian - 0 views

  •  
    "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"
dr tech

Model says her face was edited with AI to look white: 'It's very dehumanizing' | Fashion | The Guardian - 0 views

  •  
    "A Taiwanese American model says a well-known fashion designer uploaded a digitally altered runway photo that made her appear white. In a TikTok about the incident that has been viewed 1.8m times in the last week, Shereen Wu says Michael Costello, a designer who has worked with Beyoncé, Jennifer Lopez, and Celine Dion, posted a photo to his Instagram from a recent Los Angeles fashion show. The photo depicts Wu in the slinky black ballgown that she walked the runway in - but her face has been changed, made to appear as if she is a white woman."
dr tech

NVIDIA's latest AI model helps robots perform pen spinning tricks as well as humans - 0 views

  •  
    "The use for humans in the world of robotics, even as teachers, is shrinking thanks to AI. NVIDIA Research has announced the creation of Eureka, an AI agent powered by GPT-4 that has trained robots to perform tasks using reward algorithms. Notably, Eureka taught a robotic hand to do pen spinning tricks as well as a human can (honestly, as you can see in the YouTube video below, better than many of us)."
dr tech

Say what: AI can diagnose type 2 diabetes in 10 seconds from your voice - 0 views

  •  
    "Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight. Researchers believe that the AI model will drastically lower the cost for people with diabetes to be diagnosed."
dr tech

23andMe to sell DNA records to drug company | Boing Boing - 0 views

  •  
    "Have you been looking forward to somniferous alkaloid compounds customized to your personal metabolic dependency profile? Good news! 23andMe is selling everyone's DNA to the pharmaceutical industry. GSK Plc will pay 23andMe Holding Co. $20 million for access to the genetic-testing company's vast trove of consumer DNA data, extending a five-year collaboration that's allowed the drugmaker to mine genetic data as it researches new medications."
dr tech

Artists may make AI firms pay a high price for their software's 'creativity' | John Naughton | The Guardian - 0 views

  •  
    "ow, legal redress is all very well, but it's usually beyond the resources of working artists. And lawsuits are almost always retrospective, after the damage has been done. It's sometimes better, as in rugby, to "get your retaliation in first". Which is why the most interesting news of the week was that a team of researchers at the University of Chicago have developed a tool to enable artists to fight back against permissionless appropriation of their work by corporations. Appropriately, it's called Nightshade and it "lets artists add invisible changes to the pixels in their art before they upload it online so that if it's scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways" - dogs become cats, cars become cows, and who knows what else? (Boris Johnson becoming piglet, with added grease perhaps?) It's a new kind of magic. And the good news is that corporations might find it black. Or even deadly."
dr tech

AI firms must be held responsible for harm they cause, 'godfathers' of technology say | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    ""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals", adding that "we may not be able to keep them in check". Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."
dr tech

Generative AI like Midjourney creates images full of stereotypes - Rest of World - 0 views

  •  
    ""Essentially what this is doing is flattening descriptions of, say, 'an Indian person' or 'a Nigerian house' into particular stereotypes which could be viewed in a negative light," Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity. Midjourney did not respond to multiple requests for an interview or comment for this story."
dr tech

OpenAI debates when to release its AI-generated image detector | TechCrunch - 0 views

  •  
    "OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether an image was made with DALL-E 3, OpenAI's generative AI art model, or not. But the startup isn't close to making a decision anytime soon. That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
dr tech

Google's AI stoplight program is now calming traffic in a dozen cities worldwide - 0 views

  •  
    "Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
Showing items 1 - 20 of 122.