Home / Digit_al Society / Group items matching "models" in title, tags, annotations or url
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Millions of Workers Are Training AI Models for Pennies | WIRED - 0 views

  •  
    "Some experts see platforms like Appen as a new form of data colonialism, says Saiph Savage, director of the Civic AI lab at Northeastern University. "Workers in Latin America are labeling images, and those labeled images are going to feed into AI that will be used in the Global North," she says. "While it might be creating new types of jobs, it's not completely clear how fulfilling these types of jobs are for the workers in the region." Due to the ever-moving goalposts of AI, workers are in a constant race against the technology, says Schmidt. "One workforce is trained to three-dimensionally place bounding boxes around cars very precisely, and suddenly it's about figuring out if a large language model has given an appropriate answer," he says, regarding the industry's shift from self-driving cars to chatbots. Thus, niche labeling skills have a "very short half-life." "From the clients' perspective, the invisibility of the workers in microtasking is not a bug but a feature," says Schmidt. Economically, because the tasks are so small, it's more feasible to deal with contractors as a crowd instead of individuals. This creates an industry of irregular labor with no face-to-face resolution for disputes if, say, a client deems their answers inaccurate or wages are withheld. The workers WIRED spoke to say it's not low fees but the way platforms pay them that's the key issue. "I don't like the uncertainty of not knowing when an assignment will come out, as it forces us to be near the computer all day long," says Fuentes, who would like to see additional compensation for time spent waiting in front of her screen. Mutmain, 18, from Pakistan, who asked not to use his surname, echoes this. He says he joined Appen at 15, using a family member's ID, and works from 8 am to 6 pm, and another shift from 2 am to 6 am. "I need to stick to these platforms at all times, so that I don't lose work," he says, but he struggles to earn more than $50…"
dr tech

Big Tech Struggles to Turn AI Hype Into Profits - WSJ - 0 views

  •  
    "Generative artificial-intelligence tools are unproven and expensive to operate, requiring muscular servers with expensive chips that consume lots of power. Microsoft, Google, Adobe and other tech companies investing in AI are experimenting with an array of tactics to make, market and charge for it. Microsoft has lost money on one of its first generative AI products, said a person with knowledge of the figures. It and Google are now launching AI-backed upgrades to their software with higher price tags. Zoom Video Communications has tried to mitigate costs by sometimes using a simpler AI it developed in-house. Adobe and others are putting caps on monthly usage and charging based on consumption. "A lot of the customers I've talked to are unhappy about the cost that they are seeing for running some of these models," said Adam Selipsky, the chief executive of Amazon.com's cloud division, Amazon Web Services, speaking of the industry broadly."
dr tech

The Folly of DALL-E: How 4chan is Abusing Bing's New Image Model - bellingcat - 0 views

  •  
    "Racists on the notorious troll site 4chan are using a powerful new and free AI-powered image generator service offered by Microsoft to create antisemitic propaganda, according to posts reviewed by Bellingcat. Users of 4chan, which has frequently hosted hate speech and played home to posts by mass shooters, tasked Bing Image Creator to create photo-realistic antisemitic caricatures of Jews and, in recent days, shared images created by the platform depicting Orthodox men preparing to eat a baby, carrying migrants across the US border (the latter a nod to the racist Great Replacement conspiracy theory), and committing the 9/11 attacks."
dr tech

Google will let publishers hide their content from its insatiable AI - 0 views

  •  
    "Google has announced a new control in its robots.txt indexing file that would let publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a crawler called Google-Extended, and publishers can add it to the file in their site's documentation to tell Google not to use it for those two APIs. In its announcement, the company's vice president of "Trust" Danielle Romain said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases.""
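For readers unfamiliar with the mechanism: Google-Extended is used as a User-agent token inside a site's robots.txt. A minimal sketch of a full opt-out, based on Google's published syntax (the scope here is illustrative; a publisher could instead disallow only specific paths):

```txt
# robots.txt at the site root: opt all content out of use for
# Bard and Vertex AI generative APIs, without affecting Search.
User-agent: Google-Extended
Disallow: /
```

According to Google's announcement, Google-Extended is a control token rather than a separate crawler, so blocking it does not change how pages are crawled, indexed or ranked in Search.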
dr tech

Social media bosses must invest in guarding global elections against incitement of hate and violence | Global Witness - 0 views

  •  
    "In the context of ongoing corruption crises, rising anti-migrant rhetoric and anti-human-rights movements, and threats to press freedom, the role of social media companies may seem like a lesser priority, but in fact, it is a crucial part of the picture. People's rights and freedoms offline are being jeopardised by online platforms' current business model, where profit is made from stoking up anger and fear. At the South African human rights organisation where I work, the Legal Resources Centre, we are seeing an escalation of xenophobic violence that is often incited on social media. A recent joint investigation we conducted with international NGO Global Witness showed that Facebook, TikTok and YouTube all failed to enforce their own policies on hate speech and incitement to violence by approving adverts that included calls on the police in South Africa to kill foreigners, referred to non-South African nationals as a "disease", as well as incited violence through "force" against migrants."
dr tech

Google says AI systems should be able to mine publishers' work unless companies opt out | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."
dr tech

'Critical ignoring' is critical thinking for the digital age | World Economic Forum - 0 views

  •  
    "The platforms that control search were conceived in sin. Their business model auctions off our most precious and limited cognitive resource: attention. These platforms work overtime to hijack our attention by purveying information that arouses curiosity, outrage, or anger. The more our eyeballs remain glued to the screen, the more ads they can show us, and the greater profits accrue to their shareholders."
dr tech

Google Translate uses A.I. for world's oldest language | Fortune - 0 views

  •  
    "describing how they had created an A.I. model to instantly translate the ancient glyphs. The team, led by a Google software engineer and an Assyriologist from Ariel University, trained the model on existing cuneiform translations using the same technology that powers Google Translate."
dr tech

Authors file a lawsuit against OpenAI for unlawfully 'ingesting' their books | Books | The Guardian - 0 views

  •  
    "Two authors have filed a lawsuit against OpenAI, the company behind the artificial intelligence tool ChatGPT, claiming that the organisation breached copyright law by "training" its model on novels without the permission of authors. Mona Awad, whose books include Bunny and 13 Ways of Looking at a Fat Girl, and Paul Tremblay, author of The Cabin at the End of the World, filed the class action complaint to a San Francisco federal court last week."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American - 0 views

  •  
    "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting, along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha, to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated content | VentureBeat - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
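The feedback loop the researchers describe can be reproduced with a toy stand-in for a generative model (a sketch for intuition, not the paper's actual method): fit a Gaussian to a dataset, sample a fresh dataset from the fit, and repeat, so that each generation is trained only on the previous generation's output.

```python
import random
import statistics

def fit_and_generate(data, n):
    # "Train" a toy model: estimate a Gaussian from the data,
    # then emit n purely synthetic samples from that fit.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
human_data = [random.gauss(0.0, 1.0) for _ in range(500)]  # "human" data, spread 1.0

spreads = [statistics.stdev(human_data)]
data = human_data
for generation in range(20):
    # Each generation trains only on the previous generation's output.
    data = fit_and_generate(data, 500)
    spreads.append(statistics.stdev(data))

# Estimation error compounds across generations: the measured spread drifts
# away from the original value, and the tails of the human data, which the
# fitted model samples only rarely, are progressively lost.
print(f"gen 0 spread: {spreads[0]:.3f}  gen 20 spread: {spreads[-1]:.3f}")
```

Real models and datasets are vastly more complex, but the mechanism is the same: a model trained on another model's output inherits its estimation errors and adds its own, which is why the defects compound rather than average out.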
dr tech

Large, creative AI models will transform lives and labour markets | The Economist - 0 views

  •  
    "Getty points to images produced by Stable Diffusion which contain its copyright watermark, suggesting that Stable Diffusion has ingested and is reproducing copyrighted material without permission (Stability AI has not yet commented publicly on the lawsuit). The same level of evidence is harder to come by when examining ChatGPT's text output, but there is no doubt that it has been trained on copyrighted material. OpenAI will be hoping that its text generation is covered by "fair use", a provision in copyright law that allows limited use of copyrighted material for "transformative" purposes. That idea will probably one day be tested in court."
dr tech

Elections in UK and US at risk from AI-driven disinformation, say experts | Politics and technology | The Guardian - 0 views

  •  
    "Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots. Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users."
dr tech

OpenAI CEO calls for laws to mitigate 'risks of increasingly powerful' AI | ChatGPT | The Guardian - 0 views

  •  
    "The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said "regulation of AI is essential" as he testified in his first appearance in front of the US Congress. Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his prepared remarks."
dr tech

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts - Scientific American - 0 views

  •  
    "Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy."
dr tech

Could AI save the Amazon rainforest? | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "The model takes a two-pronged approach. First, it focuses on trends present in the region, looking at geostatistics and historical data from Prodes, the annual government monitoring system for deforestation in the Amazon. Understanding what has happened can help make predictions more precise. When already deforested areas are recent, this indicates gangs are operating in the area, so there's a higher risk that nearby forest will soon be wiped out. Second, it looks at variables that put the brakes on deforestation - land protected by Indigenous and quilombola (descendent of rebel slaves) communities, and areas with bodies of water, or other terrain that doesn't lend itself to agricultural expansion, for instance - and variables that make deforestation more likely, including higher population density, the presence of settlements and rural properties, and higher density of road infrastructure, both legal and illegal."
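The article does not publish the model itself, so purely as a hypothetical illustration of how risk-raising and risk-lowering variables like these might be combined, here is a toy logistic risk score (every feature name and weight below is invented for the example):

```python
import math

# Hypothetical weights, signed to match the article's description:
# positive values raise deforestation risk, negative values lower it.
WEIGHTS = {
    "recent_deforestation_nearby": 2.0,   # active clearing suggests gangs operating
    "road_density": 1.2,                  # legal and illegal road infrastructure
    "population_density": 0.8,            # settlements and rural properties
    "protected_territory": -2.5,          # Indigenous / quilombola protection
    "water_or_rough_terrain": -1.5,       # land unsuited to agricultural expansion
}
BIAS = -1.0

def deforestation_risk(features):
    """Toy logistic score in [0, 1] for one grid cell of forest."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

frontier_cell = {"recent_deforestation_nearby": 1, "road_density": 1,
                 "population_density": 1, "protected_territory": 0,
                 "water_or_rough_terrain": 0}
protected_cell = {"recent_deforestation_nearby": 0, "road_density": 0,
                  "population_density": 0, "protected_territory": 1,
                  "water_or_rough_terrain": 1}

print(f"frontier cell risk:  {deforestation_risk(frontier_cell):.2f}")
print(f"protected cell risk: {deforestation_risk(protected_cell):.2f}")
```

In a real system the weights would be learned from the Prodes historical data rather than set by hand, and the prediction would run over millions of cells, but the structure, brake variables pulling the score down and pressure variables pushing it up, is the same.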
dr tech

'I didn't give permission': Do AI's backers care about data law breaches? | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "Wooldridge says copyright is a "coming storm" for AI companies. LLMs are likely to have accessed copyrighted material, such as news articles. Indeed the GPT-4-assisted chatbot attached to Microsoft's Bing search engine cites news sites in its answers. "I didn't give explicit permission for my works to be used as training data, but they almost certainly were, and now they contribute to what these models know," he says. "Many artists are gravely concerned that their livelihoods are at risk from generative AI. Expect to see legal battles," he adds."
dr tech

AI Is Coming for Voice Actors. Artists Everywhere Should Take Note | The Walrus - 0 views

  •  
    "All of this probably means I should be worried about recent trends in artificial intelligence, which is encroaching on voice-over work in a manner similar to how it threatens the labour of visual artists and writers-both financially and ethically. The creep is only just beginning, with dubbing companies training software to replace human actors and tech companies introducing digital audiobook narration. But AI poses a threat to work opportunities across the board by giving producers the tools to recreate their favourite voices on demand, without the performer's knowledge or consent and without additional compensation. It's clear that AI will transform the arts sector, and the voice-over industry offers an early, unsettling model for what this future may look like."
Items 21 - 40 of 123