
Digit_al Society - Group items tagged openAI


dr tech

OpenAI debates when to release its AI-generated image detector | TechCrunch - 0 views

  •  
    "OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether an image was made with DALL-E 3, OpenAI's generative AI art model, or not. But the startup isn't close to making a decision anytime soon. That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
dr tech

OpenAI bans bot impersonating US presidential candidate Dean Phillips | OpenAI | The Gu... - 0 views

  •  
    "OpenAI has removed the account of the developer behind an artificial intelligence-powered bot impersonating the US presidential candidate Dean Phillips, saying it violated company policy. Phillips, who is challenging Joe Biden for the Democratic party candidacy, was impersonated by a ChatGPT-powered bot on the dean.bot site. The bot was backed by Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who have started a Super Pac - a body that funds and supports political candidates - named We Deserve Better, supporting Phillips. San Francisco-based OpenAI said it had removed a developer account that violated its policies on political campaigning and impersonation. "We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent," said the company."
dr tech

Authors file a lawsuit against OpenAI for unlawfully 'ingesting' their books | Books | ... - 0 views

  •  
    "Two authors have filed a lawsuit against OpenAI, the company behind the artificial intelligence tool ChatGPT, claiming that the organisation breached copyright law by "training" its model on novels without the permission of authors. Mona Awad, whose books include Bunny and 13 Ways of Looking at a Fat Girl, and Paul Tremblay, author of The Cabin at the End of the World, filed the class action complaint to a San Francisco federal court last week."
dr tech

Elon Musk and Sam Altman's OpenAI and Pennsylvania State University made a tool to prot... - 0 views

  •  
    "To thwart such hackers, Elon Musk's OpenAI and Pennsylvania State University released a new tool this week called "cleverhans," that lets artificial intelligence researchers test how vulnerable their AI is to adversarial examples, or purposefully malicious data meant to confuse the algorithms. Once the vulnerability has been found, a defense to the attack can automatically be applied."
dr tech

AI is making literary leaps - now we need the rules to catch up | Opinion | The Guardian - 0 views

  •  
    "If true, this would be a big deal. But, said OpenAI, "due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.""
dr tech

OpenAI CEO calls for laws to mitigate 'risks of increasingly powerful' AI | ChatGPT | T... - 0 views

  •  
    "The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said "regulation of AI is essential" as he testified in his first appearance in front of the US Congress. The apocalypse isn't coming. We must resist cynicism and fear about AI Stephen Marche Stephen Marche Read more Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his prepared remarks."
dr tech

OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time - 0 views

  •  
    "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool like everything in AI, has a high probability of detecting GPT output, but not 100% as attributed by George E. P. Box "All models are wrong, but some are useful"."
dr tech

George RR Martin and John Grisham among group of authors suing OpenAI | Books | The Gua... - 0 views

  •  
    "In papers filed on Tuesday in federal court in New York, the authors alleged "flagrant and harmful infringements of plaintiffs' registered copyrights" and called the ChatGPT program a "massive commercial enterprise" that is reliant upon "systematic theft on a mass scale"."
dr tech

What is AI chatbot phenomenon ChatGPT and could it replace humans? | Artificial intelli... - 0 views

  •  
    "ChatGPT can also give entirely wrong answers and present misinformation as fact, writing "plausible-sounding but incorrect or nonsensical answers", the company concedes. OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows"."
dr tech

Anti-Cheating Service Turnitin Says It Can Detect Use of ChatGPT - 0 views

  •  
    "However, executives at anti-cheating software maker Turnitin say they've cracked the code. The company, which works with thousands of universities and high schools to help teachers identify plagiarism, said it plans to roll out a service this year that can accurately tell whether ChatGPT has done a student's assignment for them. "
dr tech

ChatGPT listed as author on research papers: many scientists disapprove - 0 views

  •  
    "Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it's appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California."
dr tech

ChatGPT maker OpenAI releases 'not fully reliable' tool to detect AI generated content ... - 0 views

  •  
    "Open AI researchers said that while it was "impossible to reliably detect all AI-written text", good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for "academic dishonesty" and when AI chatbots were positioned as humans, they said."
dr tech

Large, creative AI models will transform lives and labour markets | The Economist - 0 views

  •  
    "Getty points to images produced by Stable Diffusion which contain its copyright watermark, suggesting that Stable Diffusion has ingested and is reproducing copyrighted material without permission (Stability AI has not yet commented publicly on the lawsuit). The same level of evidence is harder to come by when examining ChatGPT's text output, but there is no doubt that it has been trained on copyrighted material. OpenAI will be hoping that its text generation is covered by "fair use", a provision in copyright law that allows limited use of copyrighted material for "transformative" purposes. That idea will probably one day be tested in court."
dr tech

Elections in UK and US at risk from AI-driven disinformation, say experts | Politics an... - 0 views

  •  
    "Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots. Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Man beats machine at Go in human victory over AI | Ars Technica - 0 views

  •  
    "Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support. The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today's widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI. The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine."
dr tech

Misplaced fears of an 'evil' ChatGPT obscure the real harm being done | John Naughton |... - 0 views

  •  
    "Given that, isn't it interesting that the one thing nobody talks about at the moment is the environmental impact of the vast amount of computing needed to train and operate LLMs? A world that is dependent on them might be good for business but it would certainly be bad for the planet. Maybe that's what Sam Altman, the CEO of OpenAI, the outfit that created ChatGPT, had in mind when he observed that "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies"."
dr tech

OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools wo... - 0 views

  •  
    "In early December, Musk called ChatGPT "scary good" and warned, "We are not far from dangerously strong AI." But Altman has been warning the public just as much, if not more, even as he presses ahead with OpenAI's work. Last month, he worried about "how people of the future will view us" in a series of tweets. "We also need enough time for our institutions to figure out what to do," he wrote. "Regulation will be critical and will take time to figure out…having time to understand what's happening, how people want to use these tools, and how society can co-evolve is critical.""
penguin230

'Of course it's disturbing': will AI change Hollywood forever? | Film industry | The Gu... - 0 views

  •  
    "hat will AI (artificial intelligence) do to Hollywood? Who better to answer that question than ChatGPT, a thrilling but scary chatbot developed by OpenAI. "