
Home / Digital Society / Group items tagged modeling


dr tech

This Voice Doesn't Exist - Generative Voice AI - 0 views

  •  
    "Similarly to how voice cloning raises fears about the consequences of its potential misuse, increasingly many people worry that the proliferation of AI technology will put professionals' livelihoods at risk. At Eleven, we see a future in which voice actors are able to license their voices to train speech models for specific use, in exchange for fees. Clients and studios will still gladly feature professional voice talent in their projects and using AI will simply contribute to faster turnaround times and greater freedom to experiment and establish direction in early development. The technology will change how spoken audio is designed and recorded but the fact that voice actors no longer need to be physically present for every session really gives them the freedom to be involved in more projects at any one time, as well as to truly immortalize their voices."
dr tech

ChatGPT Stole Your Work. So What Are You Going to Do? | WIRED - 0 views

  •  
    "DATA LEVERAGE CAN be deployed through at least four avenues: direct action (for instance, individuals banding together to withhold, "poison," or redirect data), regulatory action (for instance, pushing for data protection policy and legal recognition of "data coalitions"), legal action (for instance, communities adopting new data-licensing regimes or pursuing a lawsuit), and market action (for instance, demanding large language models be trained only with data from consenting creators). "
dr tech

Teaching In The Age Of AI Means Getting Creative | FiveThirtyEight - 0 views

  •  
    ""ChatGPT may have better syntax than humans, but it's shallow on research and critical thinking," said Lauren Goodlad, a professor of English and comparative literature at Rutgers University and the chair of its Critical Artificial Intelligence initiative. She said she understands where concern about the tool is coming from but that - at least at the college level - the type and caliber of written tasks that ChatGPT can offer does not replace critical thinking and human creativity. "These are statistical models," she said. "And so they favor probability, as in they are trained on data, and the only reason they work as well as they do is that they are looking for probable responses to a prompt.""
dr tech

OpenAI CEO calls for laws to mitigate 'risks of increasingly powerful' AI | ChatGPT | T... - 0 views

  •  
    "The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said "regulation of AI is essential" as he testified in his first appearance in front of the US Congress. Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his prepared remarks."
dr tech

Elections in UK and US at risk from AI-driven disinformation, say experts | Politics an... - 0 views

  •  
    "Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots. Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users."
dr tech

Big Tech Struggles to Turn AI Hype Into Profits - WSJ - 0 views

  •  
    "Generative artificial-intelligence tools are unproven and expensive to operate, requiring muscular servers with expensive chips that consume lots of power. Microsoft, Google, Adobe and other tech companies investing in AI are experimenting with an array of tactics to make, market and charge for it. Microsoft has lost money on one of its first generative AI products, said a person with knowledge of the figures. It and Google are now launching AI-backed upgrades to their software with higher price tags. Zoom Video Communications has tried to mitigate costs by sometimes using a simpler AI it developed in-house. Adobe and others are putting caps on monthly usage and charging based on consumption. "A lot of the customers I've talked to are unhappy about the cost that they are seeing for running some of these models," said Adam Selipsky, the chief executive of Amazon.com's cloud division, Amazon Web Services, speaking of the industry broadly. "
dr tech

The Folly of DALL-E: How 4chan is Abusing Bing's New Image Model - bellingcat - 0 views

  •  
    "Racists on the notorious troll site 4chan are using a powerful new and free AI-powered image generator service offered by Microsoft to create antisemitic propaganda, according to posts reviewed by Bellingcat. Users of 4chan, which has frequently hosted hate speech and played home to posts by mass shooters, tasked Bing Image Creator to create photo-realistic antisemitic caricatures of Jews and, in recent days, shared images created by the platform depicting Orthodox men preparing to eat a baby, carrying migrants across the US border (the latter a nod to the racist Great Replacement conspiracy theory), and committing the 9/11 attacks."
dr tech

Google's AI stoplight program is now calming traffic in a dozen cities worldwide - 0 views

  •  
    "Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
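The feedback loop described in the excerpt — estimate average wait per approach, then shift signal timing toward the congested side — can be sketched as a toy calculation. This is an invented illustration with made-up data, field names, and thresholds, not Google's Green Light system:

```python
# Toy sketch: estimate average wait per approach from observed stop
# times, then shift green time toward the worst approach while keeping
# the total cycle length unchanged. (Hypothetical data throughout.)
waits = {
    "north": [42.0, 55.0, 61.0],   # seconds stopped, per sampled vehicle
    "east":  [12.0, 9.0, 15.0],
}

avg_wait = {a: sum(w) / len(w) for a, w in waits.items()}
worst = max(avg_wait, key=avg_wait.get)

green = {"north": 30, "east": 30}   # current green seconds per approach
if avg_wait[worst] > 30:            # crude congestion threshold
    green[worst] += 5               # give the congested approach more green
    other = "east" if worst == "north" else "north"
    green[other] -= 5               # compensate to preserve cycle length

print(worst, green)
```

A real system would of course learn these adjustments from far richer data; the point is only that the optimization target — reducing idle time at the light — reduces to reallocating a fixed cycle budget.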
dr tech

OpenAI debates when to release its AI-generated image detector | TechCrunch - 0 views

  •  
    "OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether an image was made with DALL-E 3, OpenAI's generative AI art model, or not. But the startup isn't close to making a decision anytime soon. That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
dr tech

Artists may make AI firms pay a high price for their software's 'creativity' | John Nau... - 0 views

  •  
    "Now, legal redress is all very well, but it's usually beyond the resources of working artists. And lawsuits are almost always retrospective, after the damage has been done. It's sometimes better, as in rugby, to "get your retaliation in first". Which is why the most interesting news of the week was that a team of researchers at the University of Chicago have developed a tool to enable artists to fight back against permissionless appropriation of their work by corporations. Appropriately, it's called Nightshade and it "lets artists add invisible changes to the pixels in their art before they upload it online so that if it's scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways" - dogs become cats, cars become cows, and who knows what else? (Boris Johnson becoming piglet, with added grease perhaps?) It's a new kind of magic. And the good news is that corporations might find it black. Or even deadly."
dr tech

NVIDIA's latest AI model helps robots perform pen spinning tricks as well as humans - 0 views

  •  
    "The use for humans in the world of robotics, even as teachers, is shrinking thanks to AI. NVIDIA Research has announced the creation of Eureka, an AI agent powered by GPT-4 that has trained robots to perform tasks using reward algorithms. Notably, Eureka taught a robotic hand to do pen spinning tricks as well as a human can (honestly, as you can see in the YouTube video below, better than many of us)."
dr tech

The future, soon: what I learned from Bing's AI - 0 views

  •  
    "I have been working with generative AI and, even though I have been warning that these tools are improving rapidly, I did not expect them to really be improving that rapidly. On every dimension, Bing's AI, which does not actually represent a technological leap over ChatGPT, far outpaces the earlier AI - which is less than three months old! There are many larger, more capable models on their way in the coming months, and we are not really ready."
mrrottenapple

"Anonymous" Data Won't Protect Your Identity - Scientific American - 2 views

  •  
    This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man's name, but could likely do so quite easily if he or she also knows the target's birthday, number of children, zip code, employer and car model.
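The re-identification mechanism in the excerpt is easy to demonstrate: each extra quasi-identifier the searcher knows shrinks the pool of matching records. A minimal sketch with hypothetical records and field names:

```python
# Toy "anonymized" dataset: no names exposed to the searcher, but the
# combination of ordinary attributes can still single a person out.
records = [
    {"id": "A", "sex": "M", "age": 32, "zip": "10001", "car": "Civic"},
    {"id": "B", "sex": "M", "age": 33, "zip": "10001", "car": "Civic"},
    {"id": "C", "sex": "M", "age": 32, "zip": "10002", "car": "Jetta"},
    {"id": "D", "sex": "F", "age": 31, "zip": "10001", "car": "Civic"},
]

def matches(known):
    """Return every record consistent with what the searcher knows."""
    return [r for r in records if all(r[k] == v for k, v in known.items())]

broad = matches({"sex": "M"})                               # 3 candidates
narrow = matches({"sex": "M", "age": 32, "zip": "10001"})   # exactly 1
print(len(broad), len(narrow))
```

This is the intuition behind k-anonymity: a record is protected only while many others share the same combination of quasi-identifiers.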
dr tech

Tall tales - 0 views

  •  
    "Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."
dr tech

Pause Giant AI Experiments: An Open Letter - Future of Life Institute - 0 views

  •  
    "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
dr tech

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can - and c... - 0 views

  •  
    "The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
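Smerdon's "guess the most likely words to come next" can be shown with a toy bigram table. The probabilities below are invented for illustration; a real language model learns a vastly larger version of this mapping from training data:

```python
# Toy next-word probability table (invented numbers, not a real model).
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "paper": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def most_likely_next(word):
    """Greedy next-word choice: pick the highest-probability continuation."""
    candidates = next_word_probs.get(word, {})
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("the"))  # probable continuation, not a checked fact
```

This is why such models can produce plausible-sounding fake citations: the output is whatever is most probable given the prompt, with no step that verifies it against reality.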
dr tech

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says - 0 views

  •  
    "A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app's chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. "
dr tech

The AI future for lesson plans is already here | EduResearch Matters - 0 views

  •  
    "What do today's AI-generated lesson plans look like? AI-generated lesson plans are already better than many people realise. Here's an example generated through the GPT-3 deep learning language model:"
dr tech

ChatGPT, artificial intelligence, and the future of education - Vox - 0 views

  •  
    "The technology certainly has its flaws. While the system is theoretically designed not to cross some moral red lines - it's adamant that Hitler was bad - it's not difficult to trick the AI into sharing advice on how to engage in all sorts of evil and nefarious activities, particularly if you tell the chatbot that it's writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content, and also produced some pretty concerning talking points about the treatment of Uyghur Muslims in China."
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool, like everything in AI, has a high probability of detecting GPT output, but not 100%. As George E. P. Box put it: "All models are wrong, but some are useful"."