Digit_al Society - group items tagged "human ai"

dr tech

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find - 0 views

  • ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."
smilingoldman

'Disinformation on steroids': is the US prepared for AI's influence on the election? | ... - 0 views

  • Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.
  • It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.
  • Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings may have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.
  • ...1 more annotation...
  • she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”
dr tech

Researchers shut down AI that invented its own language - 0 views

  • "The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents"."
dr tech

Man beats machine at Go in human victory over AI | Ars Technica - 0 views

  • "Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support. The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today's widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI. The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  • "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Making an image with generative AI uses as much energy as charging your phone - 0 views

  • "In fact, generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. However, they found that using an AI model to generate text is significantly less energy-intensive. Creating text 1,000 times only uses as much energy as 16% of a full smartphone charge."
dr tech

The AI Delusion: An Unbiased General Purpose Chatbot - 0 views

  • "Can AI ever be unbiased? As AI systems become more integrated into our daily lives, it's crucial that we understand the complexities of bias and how it impacts these technologies. From chatbots to hiring algorithms, the potential for AI to perpetuate and even amplify existing biases is a genuine concern."
dr tech

AI Software Creates "New" Nirvana Song "Drowned in the Sun" | Consequence of Sound - 0 views

  • "Over the Bridge hopes the project emphasizes exactly how much work goes into creating AI music. "There's an inordinate amount of human hands at the beginning, middle and end to create something like this," explained Michael Scriven, a rep for Lemmon Entertainment whose CEO is on Over the Bridge's board of directors. Scriven added, "A lot of people may think [AI] is going to replace musicians at some point, but at this point, the number of humans that are required just to get to a point where a song is listenable is actually quite significant.""
dr tech

Why AI Will Save the World | Andreessen Horowitz - 0 views

  • "Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years. What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence - and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars - much, much better from here."
dr tech

'The Godfather of AI' leaves Google and warns of danger ahead - TODAY - 0 views

  • "His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore." He is also worried that AI technologies will in time upend the job market. Today, chatbots such as ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that." Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons - those killer robots - become reality."
dr tech

Human-like programs abuse our empathy - even Google engineers aren't immune | Emily M B... - 0 views

  • "That is why we must demand transparency here, especially in the case of technology that uses human-like interfaces such as language. For any automated system, we need to know what it was trained to do, what training data was used, who chose that data and for what purpose. In the words of AI researchers Timnit Gebru and Margaret Mitchell, mimicking human behaviour is a "bright line" - a clear boundary not to be crossed - in computer software development. We treat interactions with things we perceive as human or human-like differently. With systems such as LaMDA we see their potential perils and the urgent need to design systems in ways that don't abuse our empathy or trust."
dr tech

Pause Giant AI Experiments: An Open Letter - Future of Life Institute - 0 views

  • "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
dr tech

Are you 80% angry and 2% sad? Why 'emotional AI' is fraught with problems | Artificial ... - 0 views

  • ""An emotionally intelligent human does not usually claim they can accurately put a label on everything everyone says and tell you this person is currently feeling 80% angry, 18% fearful, and 2% sad," says Edward B Kang, an assistant professor at New York University writing about the intersection of AI and sound. "In fact, that sounds to me like the opposite of what an emotionally intelligent person would say." Adding to this is the notorious problem of AI bias. "Your algorithms are only as good as the training material," Barrett says. "And if your training material is biased in some way, then you are enshrining that bias in code.""
dr tech

A debate between AI experts shows a battle over the technology's future - MIT Technolog... - 0 views

  • "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something-literally the only model we've got-we need to take that model seriously."
dr tech

Scientists Increasingly Can't Explain How AI Works - 0 views

  • "There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)-made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains-often seem to mirror not just human intelligence but also human inexplicability."
dr tech

AI Reveals the Most Human Parts of Writing | WIRED - 0 views

  • "The role of AI writing systems as drafting buddies is a big departure from how writers typically get help, yet so far it is their biggest selling point and use case. Most writing tools available today will do some drafting for you, either by continuing where you left off or responding to a more specific instruction. SudoWrite, a popular AI writing tool for novelists, does all of these, with options to "write" where you left off, "describe" a highlighted noun, or "brainstorm" ideas based on a situation you describe. Systems like Jasper.ai or Lex will complete your paragraph or draft copy based on instructions, and Laika is similar but more focused on fiction and drama."
dr tech

The New Age of Hiring: AI Is Changing the Game for Job Seekers - CNET - 0 views

  • "If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources. The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency."
dr tech

MIT's 'PhotoGuard' protects your images from malicious AI edits | Engadget - 0 views

  • "PhotoGuard works by altering select pixels in an image such that they will disrupt an AI's ability to understand what the image is. Those "perturbations," as the research team refers to them, are invisible to the human eye but easily readable by machines. The "encoder" attack method of introducing these artifacts targets the algorithmic model's latent representation of the target image - the complex mathematics that describes the position and color of every pixel in an image - essentially preventing the AI from understanding what it is looking at."
dr tech

Incoherent, creepy and gorgeous: we asked six leading artists to make work using AI - a... - 0 views

  • "Until recently, I was deeply sceptical of the idea of AI art. I saw it as hype and casuistry, and with some cause: widely publicised efforts such as Ai-Da the robot artist obviously exaggerate the independence of the machine and play on our fascination with sentient artificial beings. But now the dream is coming true, at least in art. And art is surely one of the most inimitable expressions of the human mind."
dr tech

'The Gospel': how Israel uses AI to select bombing targets in Gaza | Israel | The Guardian - 0 views

  • "Sources familiar with how AI-based systems have been integrated into the IDF's operations said such tools had significantly sped up the target creation process. "We prepare the targets automatically and work according to a checklist," a source who previously worked in the target division told +972/Local Call. "It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate." A separate source told the publication the Gospel had allowed the IDF to run a "mass assassination factory" in which the "emphasis is on quantity and not on quality". A human eye, they said, "will go over the targets before each attack, but it need not spend a lot of time on them". For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns."