See How Real AI-Generated Images Have Become - The New York Times
-
The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
-
Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
-
Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals.
-
Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
-
“The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.ai, a company that helps clients fight disinformation.
-
Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
-
Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
-
In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
-
Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
-
Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
-
The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association.
-
Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
-
The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification.
-
The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software.
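The idea of cryptographically sealing credentials into a file can be illustrated with a minimal sketch. This is an assumption-laden toy, not Revel.ai's or Truepic's actual method: it uses a symmetric HMAC where real provenance systems use public-key signatures and standardized manifests, but it shows the core property the article describes, i.e. that tampering with the bytes breaks the signature and the credentials no longer verify.

```python
import hashlib
import hmac

# Hypothetical signing key; a real system would use a signer's private key
# and publish the corresponding public key for verification.
SECRET_KEY = b"demo-provenance-key"

def seal(media_bytes: bytes) -> bytes:
    """Compute a signature 'sealed into' the file's credentials."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).digest()

def verify(media_bytes: bytes, signature: bytes) -> bool:
    """Trusted software recomputes the signature; a mismatch means tampering."""
    return hmac.compare_digest(seal(media_bytes), signature)

original = b"\x89PNG...synthetic image data"  # stand-in for the video/image bytes
sig = seal(original)

print(verify(original, sig))           # untouched file: credentials verify
print(verify(original + b"!", sig))    # any edit to the bytes breaks the seal
```

Because the signature is derived from the full content, even a one-byte change invalidates it, which is why the credentials "prevent ... appearing" in trusted software once an image has been altered.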
-
The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
-
“The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic.
-
Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.