Group items tagged deepfake

Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • Fully 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tools that campaigns, and all of us, use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Rep. Yvette Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • The combination of easily generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds Renée DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • Some are now using its existence as a pretext to dismiss accurate information.
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out (see the sketch after this list).
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
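
How easily such metadata can be stripped is worth seeing concretely. Below is a minimal sketch in Python using Pillow; the filenames are hypothetical, and EXIF stands in for whatever record format Google actually attaches, which the article doesn't specify. Rebuilding the image from its raw pixels leaves every tag, including any record of alteration, behind:

    from PIL import Image

    img = Image.open("edited_photo.jpg")   # hypothetical file carrying an "altered" record
    print(dict(img.getexif()))             # whatever provenance tags are attached

    # Rebuild the image from raw pixels only; metadata is not copied over.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("stripped_photo.jpg")
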
Javier E

AI in Politics Is So Much Bigger Than Deepfakes - The Atlantic

  • “Deepfakes have been the next big problem coming in the next six months for about four years now,” Joshua Tucker, a co-director of the NYU Center for Social Media and Politics, told me.
  • Academic research suggests that disinformation may constitute a relatively small proportion of the average American’s news intake, that it’s concentrated among a small minority of people, and that, given how polarized the country already is, it probably doesn’t change many minds.
  • If the first-order worry is that people will get duped, the second-order worry is that the fear of deepfakes will lead people to distrust everything.
  • Researchers call this effect “the liar’s dividend,” and politicians have already tried to cast off unfavorable clips as AI-generated: Last month, Donald Trump falsely claimed that an attack ad had used AI to make him look bad.
  • “Deepfake” could become the “fake news” of 2024, an infrequent but genuine phenomenon that gets co-opted as a means of discrediting the truth
  • Steve Bannon’s infamous assertion that the way to discredit the media is to “flood the zone with shit.”
  • AI is less likely to create new dynamics than to amplify existing ones. Presidential campaigns, with their bottomless coffers and sprawling staff, have long had the ability to target specific groups of voters with tailored messaging
  • They might have thousands of data points about who you are, obtained by gathering information from public records, social-media profiles, and commercial brokers
  • “It is now so cheap to engage in this mass personalization,” Laura Edelson, a computer-science professor at Northeastern University who studies misinformation and disinformation, told me. “It’s going to make this content easier to create, cheaper to create, and put more communities within the reach of it.”
  • That sheer ease could overwhelm democracies’ already-vulnerable election infrastructure. Local- and state-election workers have been under attack since 2020, and AI could make things worse.
  • Those officials have also expressed the worry, he said, that generative AI will turbocharge the harassment they face, by making the act of writing and sending hate mail virtually effortless. (The consequences may be particularly severe for women.)
  • past attacks—most notably the Russian hack of John Podesta’s email, in 2016—have wrought utter havoc. But now pretty much anyone—whatever language they speak and whatever their writing ability—can send out hundreds of phishing emails in fluent English prose. “The cybersecurity implications of AI for elections and electoral integrity probably aren’t getting nearly the focus that they should,”
  • Just last week, AI-generated audio surfaced of one Harlem politician criticizing another. New York City has perhaps the most robust local-news ecosystem of any city in America, but elsewhere, in communities without the media scrutiny and fact-checking apparatuses that exist at the national level, audio like this could cause greater chaos.
  • In countries that speak languages with less online text for LLMs to gobble up, AI tools may be less sophisticated. But those same countries are likely the ones where tech platforms will pay the least attention to the spread of deepfakes and other disinformation, Edelson told me. India, Russia, the U.S., the EU—this is where platforms will focus. “Everything else”—Namibia, Uzbekistan, Uruguay—“is going to be an afterthought,”
  • Most of us tend to fret about the potential fake video that deceives half of the nation, not about the flood of FOIA requests already burying election officials. If there is a cost to that way of thinking, the world may pay it this year at the polls.
Javier E

Deepfakes are biggest AI concern, says Microsoft president | Artificial intelligence (AI) | The Guardian

  • Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence was deepfakes, realistic looking but false content.
  • “We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,”
  • “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
  • “We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,”
  • Smith also argued in the speech, and in a blogpost issued on Thursday, that people needed to be held accountable for any problems caused by AI and he urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure so that humans remain in control.
  • He urged use of a “Know Your Customer”-style system for developers of powerful AI models to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.
  • Some proposals being considered on Capitol Hill would focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.
Javier E

Inside the porn industry, AI looms large - The Washington Post

  • Since the first AVN “expo” in 1998, adult entertainment has been overtaken by two business models: Pornhub, a free site supported by ads, and OnlyFans, a subscription platform where individual actors control their businesses and their fate.
  • Now, a new shift is on the horizon: Artificial intelligence models that spin up photorealistic images and videos that put viewers in the director’s chair, letting them create whatever porn they like.
  • Some site owners think it’s a privilege people will pay for, and they are racing to build custom AI models that — unlike the sanitized content on OpenAI’s video engine Sora — draw on a vast repository of porn images and videos.
  • The trickiest question may be how to prevent abuse. AI generators have technological boundaries, but not morals, and it’s relatively easy for users to trick them into creating content that depicts violence, rape, sex with children or a celebrity — or even a crush from work who never consented to appear
  • In some cases, the engines themselves are trained on porn images whose subjects didn’t explicitly agree to the new use. Currently, no federal laws protect the victims of nonconsensual deepfakes.
  • Adult entertainment is a giant industry accounting for a substantial chunk of all internet traffic: Major porn sites get more monthly visitors and page views than Amazon, Netflix, TikTok or Zoom
  • The industry is a habitual early adopter of new technology, from VHS to DVD to dot com. In the mid-2000s, porn companies set up massive sites where users upload and watch free videos, and ad sales foot the bills.
  • At last year’s AVN conference, Steven Jones said his peers looked at him “like he was crazy” when he talked about AI opportunities: “Nobody was interested.” This year, Jones said, he’s been “the belle of the ball.”
  • He called up his old business partner, and the two immediately spent about $550,000 securing the web domains for porn dot ai, deepfake dot com and deepfakes dot com, Jones said. “Lightspeed” was back.
  • One major model, Stable Diffusion, shares its code publicly, and some technologists have figured out how to edit the code to allow for sexual images
  • What keeps Jones up at night is people trying to use his company’s tools to generate images of abuse, he said. The models have some technological guardrails that make it difficult for users to render children, celebrities or acts of violence. But people are constantly looking for workarounds.
  • So with help from an angel investor he will not name, Jones hired five employees and a handful of offshore contractors and started building an image engine trained on bundles of freely available pornographic images, as well as thousands of nude photos from Jones’s own collection
  • Users create what Jones calls a “dream girl,” prompting the AI with descriptions of the character’s appearance, pose and setting. The nudes don’t portray real people, he said. Rather, the goal is to re-create a fantasy from the user’s imagination.
  • The AI-generated images got better, their computerized sheen growing steadily less noticeable. Jones grew his user base to 500,000 people, many of whom pay to generate more images than the five per day allotted to free accounts, he said. The site’s “power users” generate AI porn for 10 hours a day, he said.
  • Jones described the site as an “artists’ community” where people can explore their sexualities and fantasies in a safe space. Unlike some corners of the traditional adult industry, no performers are being pressured, underpaid or placed in harm’s way
  • And critically, consumers don’t have to wait for their favorite OnlyFans performer to come online or trawl through Pornhub to find the content they like.
  • Next comes AI-generated video — “porn’s holy grail,” Jones said. Eventually, he sees the technology becoming interactive, with users giving instructions to lifelike automated “performers.” Within two years, he said, there will be “fully AI cam girls,” a reference to creators who make solo sex content.
  • It costs $12 per day to rent a server from Amazon Web Services, he said, and generating a single picture requires users to have access to a corresponding server. His users have so far generated more than 1.6 million images.
  • Copyright holders including newspapers, photographers and artists have filed a slew of lawsuits against AI companies, claiming the companies trained their models on copyrighted content. If plaintiffs win, it could cut off the free-for-all that benefits entrepreneurs such as Jones.
  • But Jones’s plan to create consumer-friendly AI porn engines faced significant obstacles. The companies behind major image-generation models used technical boundaries to block “not safe for work” content and, without racy images to learn from, the models weren’t good at re-creating nude bodies or scenes.
  • Jones said his team takes down images that other users flag as abusive. Their list of blocked prompts currently contains 1,000 terms including “high school.” (A sketch of how such a blocklist might work follows this list.)
  • “I see certain things people type in, and I just hope to God they’re trying to test the model, like we are. I hope they don’t actually want to see the things they’re typing in.”
  • Peter Acworth, the owner of kink dot com, is trying to teach an AI porn generator to understand even subtler concepts, such as the difference between torture and consensual sexual bondage. For decades Acworth has pushed for spaces — in the real world and online — for consenting adults to explore nonconventional sexual interests. In 2006, he bought the San Francisco Armory, a castle-like building in the city’s Mission neighborhood, and turned it into a studio where his company filmed fetish porn until shuttering in 2017.
  • Now, Acworth is working with engineers to train an image-generation model on pictures of BDSM, an acronym for bondage and discipline, dominance and submission, sadism and masochism.
  • Others alluded to a porn apocalypse, with AI wiping out existing models of adult entertainment. “Look around,” said Christian Burke, head of engineering at the adult-industry payment app Melon, gesturing at performers huddled, laughing and hugging across the show floor. “This could look entirely different in a few years.”
  • But the age of AI brings few guarantees for the people, largely women, who appear in porn. Many have signed broad contracts granting companies the rights to reproduce their likeness in any medium for the rest of time
  • Not only could performers lose income, Walters said, they could find themselves in offensive or abusive scenes they never consented to.
  • Lana Smalls, a 23-year-old performer whose videos have been viewed 20 million times on Pornhub, said she’s had colleagues show up to shoots with major studios only to be surprised by sweeping AI clauses in their contracts.
  • “This industry is too fragmented for collective bargaining,” Spiegler said. “Plus, this industry doesn’t like rules.”
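
The Post doesn't describe how the 1,000-term blocklist mentioned above is implemented, but the simplest plausible version is a normalized substring check. A minimal sketch, with invented stand-in terms ("high school" is the one term the article names):

    # Stand-in for the ~1,000-term blocklist described above; real systems
    # pair term lists with classifiers and human review.
    BLOCKED_TERMS = {"high school", "another blocked phrase"}

    def is_allowed(prompt: str) -> bool:
        """Reject a prompt if any blocked term appears in it."""
        normalized = " ".join(prompt.lower().split())  # collapse case and spacing
        return not any(term in normalized for term in BLOCKED_TERMS)

    print(is_allowed("a beach at sunset"))         # True
    print(is_allowed("a HIGH  SCHOOL classroom"))  # False

A check this naive is trivially evaded by misspellings, which is one reason the workarounds the article describes keep appearing.
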
Javier E

See How Real AI-Generated Images Have Become - The New York Times

  • The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
  • The advancements are already fueling disinformation and being used to stoke political divisions
  • Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
  • Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals
  • Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
  • “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.
  • Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
  • Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
  • In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
  • Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
  • Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
  • The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association
  • Newsrooms will increasingly struggle to authenticate content.
  • Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
  • The video explained that the deepfake had been created, with the consent of Nina Schick, the A.I. expert it depicts, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification
  • The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software. (A sketch of the underlying idea follows this list.)
  • The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
  • “The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic
  • Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.
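
The cryptographic sealing described above follows the same principle as content-credential standards such as C2PA, though the article gives no implementation details. A minimal sketch of the idea in Python, using an Ed25519 signature from the cryptography library and placeholder bytes: any change to the signed file makes verification fail, so credentials for a tampered copy are withheld.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The creator signs the exact bytes of the file at publication time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    video_bytes = b"...original video bytes..."  # placeholder content
    signature = private_key.sign(video_bytes)

    # A viewer's trusted software verifies the bytes against the signature.
    for label, data in [("original", video_bytes),
                        ("tampered", video_bytes + b"x")]:
        try:
            public_key.verify(signature, data)
            print(label, "-> credentials intact")
        except InvalidSignature:
            print(label, "-> signature broken; credentials withheld")
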
Javier E

There's Probably Nothing We Can Do About This Awful Deepfake Porn Problem

  • we can’t (as in, are unable to in real-world terms) censor far-right content online because of the basic reality of modern communications technology. The internet makes the transmission of information, no matter how ugly or shocking or secret, functionally impossible to stop. Digital infrastructure is spread out across the globe, including in regimes that do not play ball with American legal or corporate mandates, and there’s plenty of server racks out there in the world buzzing along that are inaccessible to even the most dedicated hall monitors
  • it happens that I am one of those free speech absolutists, yes, but that is very explicitly not what the piece argues - it’s precisely an argument that whether we should censor is entirely moot, because we can’t. The technological impediments to cutting off the flow of information (at least that which is not tightly controlled at the supply-side) are now existential.
  • This is a reality people have to accept, even if - especially if - they think that reality is corrosive and ugly. I suspect it’s a similar story with all of this horrible AI “deepfake” celebrity porn.
  • The trouble is that, as I’ve seen again and again, in this era of entitlement people think saying “we can’t do this” necessarily means “I don’t want to.”
Javier E

These Influencers Aren't Flesh and Blood, Yet Millions Follow Them - The New York Times

  • Everything about Ms. Sousa, better known as Lil Miquela, is manufactured: the straight-cut bangs, the Brazilian-Spanish heritage, the bevy of beautiful friends
  • Lil Miquela, who has 1.6 million Instagram followers, is a computer-generated character. Introduced in 2016 by a Los Angeles company backed by Silicon Valley money, she belongs to a growing cadre of social media marketers known as virtual influencers
  • Each month, more than 80,000 people stream Lil Miquela’s songs on Spotify. She has worked with the Italian fashion label Prada, given interviews from Coachella and flaunted a tattoo designed by an artist who inked Miley Cyrus.
  • Until last year, when her creators orchestrated a publicity stunt to reveal her provenance, many of her fans assumed she was a flesh-and-blood 19-year-old. But Lil Miquela is made of pixels, and she was designed to attract follows and likes.
  • Why hire a celebrity, a supermodel or even a social media influencer to market your product when you can create the ideal brand ambassador from scratch
  • Xinhua, the Chinese government’s media outlet, introduced a virtual news anchor last year, saying it “can work 24 hours a day.”
  • Soul Machines, a company founded by the Oscar-winning digital animator Mark Sagar, produced computer-generated teachers that respond to human students.
  • “Social media, to date, has largely been the domain of real humans being fake,” Mr. Ohanian added. “But avatars are a future of storytelling.”
  • Edward Saatchi, who started Fable, predicted that virtual beings would someday supplant digital home assistants and computer operating systems from companies like Amazon and Google.
  • YouPorn got in on the trend with Jedy Vales, an avatar who promotes the site and interacts with its users.
  • when a brand ambassador’s very existence is questionable — especially in an environment studded with deceptive deepfakes, bots and fraud — what happens to the old virtue of truth in advertising?
  • the concerns faced by human influencers — maintaining a camera-ready appearance and dealing with online trolls while keeping sponsors happy — do not apply to beings who never have an off day.
  • “That’s why brands like working with avatars — they don’t have to do 100 takes,”
  • Many of the characters advance stereotypes and impossible body-image standards. Shudu, a “digital fabrication” that Mr. Wilson modeled on the Princess of South Africa Barbie, was called “a white man’s digital projection of real-life black womanhood.”
  • “It’s an interesting and dangerous time, seeing the potency of A.I. and its ability to fake anything.”
  • Last summer, Lil Miquela’s Instagram account appeared to be hacked by a woman named Bermuda, a Trump supporter who accused Lil Miquela of “running from the truth.” A wild narrative emerged on social media: Lil Miquela was a robot built to serve a “literal genius” named Daniel Cain before Brud reprogrammed her. “My identity was a choice Brud made in order to sell me to brands, to appear ‘woke,’” she wrote in one post. The character vowed never to forgive Brud. A few months later, she forgave.
  • While virtual influencers are becoming more common, fans have engaged less with them than with the average fashion tastemaker online
  • “An avatar is basically a mannequin in a shop window,” said Nick Cooke, a co-founder of the Goat Agency, a marketing firm. “A genuine influencer can offer peer-to-peer recommendations.”
Javier E

Opinion | A.I. Is Endangering Our History - The New York Times

  • Fortunately, there are numerous reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events
  • While we have reason to believe the future may be safe, we worry that the past is not.
  • History can be a powerful tool for manipulation and malfeasance. The same generative A.I. that can fake current events can also fake past ones
  • there is a world of content out there that has not been watermarked, which is done by adding imperceptible information to a digital file so that its provenance can be traced. Once watermarking at creation becomes widespread, and people adapt to distrust content that is not watermarked, then everything produced before that point in time can be much more easily called into question.
  • countering them is much harder when the cost of creating near-perfect fakes has been radically reduced.
  • There are many examples of how economic and political powers manipulated the historical record to their own ends. Stalin purged disloyal comrades from history by executing them — and then altering photographic records to make it appear as if they never existed
  • Slovenia, upon becoming an independent country in 1992, “erased” over 18,000 people from the registry of residents — mainly members of the Roma minority and other ethnic non-Slovenes. In many cases, the government destroyed their physical records, leading to their loss of homes, pensions, and access to other services, according to a 2003 report by the Council of Europe Commissioner for Human Rights.
  • The infamous Protocols of the Elders of Zion, first published in a Russian newspaper in 1903, purported to be meeting minutes from a Jewish conspiracy to control the world. First discredited in August 1921 as a forgery plagiarized from multiple unrelated sources, the Protocols featured prominently in Nazi propaganda and have long been used to justify antisemitic violence, including a citation in Article 32 of Hamas’s 1988 founding Covenant.
  • In 1924, the Zinoviev Letter, said to be a secret communiqué from the head of the Communist International in Moscow to the Communist Party of Great Britain to mobilize support for normalizing relations with the Soviet Union, was published by The Daily Mail four days before a general election. The resulting scandal may have cost Labour the election.
  • As it becomes easier to generate historical disinformation, and as the sheer volume of digital fakes explodes, the opportunity will become available to reshape history, or at least to call our current understanding of it into question.
  • Decades later Operation Infektion — a Soviet disinformation campaign — used forged documents to spread the idea that the United States had invented H.I.V., the virus that causes AIDS, as a biological weapon.
  • Fortunately, a path forward has been laid by the same companies that created the risk.
  • In indexing a large share of the world’s digital media to train their models, the A.I. companies have effectively created systems and databases that will soon contain all of humankind’s digitally recorded content, or at least a meaningful approximation of it.
  • They could start work today to record watermarked versions of these primary documents, which include newspaper archives and a wide range of other sources, so that subsequent forgeries are instantly detectable.
  • many of the intellectual property concerns around providing a searchable online archive do not apply to creating watermarked and time-stamped versions of documents, because those versions need not be made publicly available to serve their purpose. One can compare a claimed document to the recorded archive by using a mathematical transformation of the document known as a “hash,” the same technique the Global Internet Forum to Counter Terrorism uses to help companies screen for known terrorist content (see the sketch after this list).
  • creating verified records of historical documents can be valuable for the large A.I. companies. New research suggests that when A.I. models are trained on A.I.-generated data, their performance quickly degrades. Thus separating what is actually part of the historical record from newly created “facts” may be critical.
  • Preserving the past will also mean preserving the training data, the associated tools that operate on it and even the environment that the tools were run in.
  • Such a vellum will be a powerful tool. It can help companies to build better models, by enabling them to analyze what data to include to get the best content, and help regulators to audit bias and harmful content in the models
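
The hash comparison the authors describe is simple to sketch. Assuming a hypothetical archive of SHA-256 digests recorded when documents were first archived, a claimed document checks out only if its exact bytes were recorded:

    import hashlib

    # Hypothetical archive of digests recorded at archiving time.
    recorded_hashes = {
        hashlib.sha256(b"original 1921 newspaper page ...").hexdigest(),
    }

    def matches_archive(document: bytes) -> bool:
        """True only if this exact document was recorded in the archive."""
        return hashlib.sha256(document).hexdigest() in recorded_hashes

    print(matches_archive(b"original 1921 newspaper page ..."))     # True
    print(matches_archive(b"a subtly forged version of the page"))  # False

Exact hashes treat any re-encoding as a mismatch; hash-sharing systems like the Global Internet Forum to Counter Terrorism's therefore use perceptual hashes that tolerate small transformations.
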
Javier E

For Two Months, I Got My News From Print Newspapers. Here's What I Learned. - The New York Times

  • In January, after the breaking-newsiest year in recent memory, I decided to travel back in time. I turned off my digital news notifications, unplugged from Twitter and other social networks, and subscribed to home delivery of three print newspapers — The Times, The Wall Street Journal and my local paper, The San Francisco Chronicle — plus a weekly newsmagazine, The Economist.
  • I have spent most days since then getting the news mainly from print, though my self-imposed asceticism allowed for podcasts, email newsletters and long-form nonfiction (books and magazine articles). Basically, I was trying to slow-jam the news — I still wanted to be informed, but was looking to formats that prized depth and accuracy over speed.
  • It has been life changing. Turning off the buzzing breaking-news machine I carry in my pocket was like unshackling myself from a monster who had me on speed dial, always ready to break into my day with half-baked bulletins.
  • Most of all, I realized my personal role as a consumer of news in our broken digital news environment.
  • And I’m embarrassed about how much free time I have — in two months, I managed to read half a dozen books, took up pottery and (I think) became a more attentive husband and father.
  • Now I am not just less anxious and less addicted to the news, I am more widely informed
  • We have spent much of the past few years discovering that the digitization of news is ruining how we collectively process information. Technology allows us to burrow into echo chambers, exacerbating misinformation and polarization and softening up society for propaganda.
  • With artificial intelligence making audio and video as easy to fake as text, we’re entering a hall-of-mirrors dystopia, what some are calling an “information apocalypse.”
  • the experiment taught me several lessons about the pitfalls of digital news and how to avoid them.
  • I distilled those lessons into three short instructions, the way the writer Michael Pollan once boiled down nutrition advice: Get news. Not too quickly. Avoid social.
  • The Times has about 3.6 million paying subscribers, but about three-quarters of them pay for just the digital version. During the 2016 election, fewer than 3 percent of Americans cited print as their most important source of campaign news; for people under 30, print was their least important source.
  • What do you get for all that dough? News. That sounds obvious until you try it — and you realize how much of what you get online isn’t quite news, and more like a never-ending stream of commentary, one that does more to distort your understanding of the world than illuminate it.
  • On social networks, every news story comes to you predigested. People don’t just post stories — they post their takes on stories, often quoting key parts of a story to underscore how it proves them right, so readers are never required to delve into the story to come up with their own view.
  • the prominence of commentary over news online and on cable news feels backward, and dangerously so. It is exactly our fealty to the crowd — to what other people are saying about the news, rather than the news itself — that makes us susceptible to misinformation.
  • Real life is slow; it takes professionals time to figure out what happened, and how it fits into context. Technology is fast. Smartphones and social networks are giving us facts about the news much faster than we can make sense of them, letting speculation and misinformation fill the gap.
  • I was getting news a day old, but in the delay between when the news happened and when it showed up on my front door, hundreds of experienced professionals had done the hard work for me.
  • I was left with the simple, disconnected and ritualistic experience of reading the news, mostly free from the cognitive load of wondering whether the thing I was reading was possibly a blatant lie.
  • One weird aspect of the past few years is how a “tornado of news-making has scrambled Americans’ grasp of time and memory,” as my colleague Matt Flegenheimer put it last year. By providing a daily digest of the news, the newspaper alleviates this sense. Sure, there’s still a lot of news — but when you read it once a day, the world feels contained and comprehensible
  • What’s important is choosing a medium that highlights deep stories over quickly breaking ones.
  • And, more important, you can turn off news notifications. They distract and feed into a constant sense of fragmentary paranoia about the world
  • Avoid social. This is the most important rule of all. After reading newspapers for a few weeks, I began to see it wasn’t newspapers that were so great, but social media that was so bad.
  • The built-in incentives on Twitter and Facebook reward speed over depth, hot takes over facts and seasoned propagandists over well-meaning analyzers of news.
  • for goodness’ sake, please stop getting your news mainly from Twitter and Facebook. In the long run, you and everyone else will be better off.
Javier E

Trump and Johnson aren't replaying the 1930s - but it's just as frightening | George Monbiot | The Guardian

  • anger that should be directed at billionaires is instead directed by them. Facing inequality and exclusion, poor wages and insecure jobs, people are persuaded by the newspapers billionaires own and the parties they fund to unleash their fury on immigrants, Muslims, the EU and other “alien” forces.
  • From the White House, his Manhattan tower and his Florida resort, Donald Trump tweets furiously against “elites”. Dominic Cummings hones the same message as he moves between his townhouse in Islington, with its library and tapestry room, and his family estate in Durham. Clearly, they don’t mean political or economic elites. They mean intellectuals: the students, teachers, professors and independent thinkers who oppose their policies. Anti-intellectualism is a resurgent force in politics.
  • Myths of national greatness and decline abound. Make America Great Again and Take Back Control propose a glorious homecoming to an imagined golden age. Conservatives and Republicans invoke a rich mythology of family life and patriarchal values. Large numbers of people in the United Kingdom regret the loss of empire.
  • Extravagant buffoons, building their power base through the visual media, displace the wooden technocrats who once dominated political life. Debate gives way to symbols, slogans and sensation. Political parties that once tolerated a degree of pluralism succumb to cults of personality.
  • Politicians and political advisers behave with impunity. During the impeachment hearings, Trump’s lawyer argued, in effect, that the president is the nation, and his interests are inseparable from the national interest.
  • Trump shamelessly endorses nativism and white supremacy. Powerful politicians, such as the Republican congressman Steve King, talk of defending “western civilisation” against “subjugation” by its “enemies”. Minorities are disenfranchised. Immigrants are herded into detention centres.
  • Political structures still stand, but they are hollowed out, as power migrates into unaccountable, undemocratic spheres: conservative fundraising dinners, US political action committees, offshore trade tribunals, tax havens and secrecy regimes.
  • The bodies supposed to hold power to account, such as the Electoral Commission and the BBC, are attacked, disciplined and cowed. Politicians and newspapers launch lurid attacks against parliament, the judiciary and the civil service.
  • Political lying becomes so rife that voters lose the ability to distinguish fact from fiction. Conspiracy theories proliferate, distracting attention from the real ways in which our rights and freedoms are eroded
  • With every unpunished outrage against integrity in public life, trust in the system corrodes. The ideal of democracy as a shared civic project gives way to a politics of dominance and submission.
  • All these phenomena were preconditions for – or facilitators of – the rise of European fascism during the first half of the 20th century. I find myself asking a question I thought we would never have to ask again. Is the resurgence of fascism a real prospect, on either side of the Atlantic?
  • It is easier to define as a political method. While its stated aims may vary wildly, the means by which it has sought to grab and build power are broadly consistent. But I think it’s fair to say that though the new politics have some strong similarities to fascism, they are not the same thing.
  • Trump’s politics and Johnson’s have some characteristics that were peculiar to fascism, such as their constant excitation and mobilisation of their base through polarisation, their culture wars, their promiscuous lying, their fabrication of enemies and their rhetoric of betrayal
  • But there are crucial differences. Far from valorising and courting young people, they appeal mostly to older voters. Neither relies on paramilitary terror
  • Neither government seems interested in using warfare as a political tool.
  • Trump and Johnson preach scarcely regulated individualism: almost the opposite of the fascist doctrine of total subordination to the state.
  • Last century’s fascism thrived on economic collapse and mass unemployment. We are nowhere near the conditions of the Great Depression, though both countries now face a major slump in which millions could lose their jobs and homes.
  • Not all the differences are reassuring. Micro-targeting on social media, peer-to-peer texting and now the possibility of deepfake videos allow today’s politicians to confuse and misdirect people, to bombard us with lies and conspiracy theories, to destroy trust and create alternative realities more quickly and effectively than any tools 20th-century dictators had at their disposal.
  • this isn’t fascism. It is something else, something we have not yet named. But we should fear it and resist it as if it were.
Javier E

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Javier E

'Conflict' Review: How Wars Are Fought and Won - WSJ

  • “Conflict” brings together one of America’s top military thinkers and Britain’s pre-eminent military historian to examine the evolution of warfare since 1945. Retired Gen. David Petraeus, who co-authored the U.S. Army’s field manual on counterinsurgency warfare and oversaw the troop surge in Iraq in 2007, brings a professional eye to politico-military strategy. Andrew Roberts, who has been writing on military leadership since the early 1990s, offers an “arc of history” approach to the subject of mass destruction.
  • The pair’s ambitious goals: to provide some context to the tapestry of modern conflict and a glimpse of wars to come.
  • The book begins with the early struggles of the postwar era. China’s brutal civil war, the authors observe, demonstrated “that guerrilla warfare undertaken according to Maoist military principles by smaller forces could ultimately be successful against a Western-backed government.”
  • the authors argue that the first job of a strategic leader is to get the big ideas right. Those who have succeeded include Gerald Templer, who became Britain’s high commissioner for Malaya in 1952 and whose reference to winning “the hearts and minds of the people” remains “the most succinct explanation for how to win a counter-insurgency.”
  • By contrast, the nationalist forces in China, the French in Algeria and the Americans in Vietnam got the big ideas wrong and paid a steep price.
  • Elon Musk’s control of the Starlink satellite internet system, they note, gave him a unique veto power over Ukrainian operations in Crimea. “With individual tycoons such as Elon Musk, Mark Zuckerberg and Jeff Bezos wielding such extraordinary power,” the authors tell us, “wars of the future will have to take their influence into account.”
  • Russia’s invasion of Ukraine in 2022 serves as the book’s case study on how badly Goliath can stumble against David
  • On the 2021 collapse of Afghanistan’s government troops, who had been so expensively trained and equipped under Presidents Bush, Obama, Trump and Biden, Mr. Petraeus remarks that “the troops were brave enough—the 66,000 dead Afghan soldiers killed during the war attest to that. But they fought for an often corrupt and incompetent government that never gained the trust and confidence of local communities, which had historically determined the balance of power within Afghanistan.”
  • The final chapter teases out the contours of future conflicts. Artificial intelligence, strategic mineral monopolies and “hybrid wars”—where weapons include deepfake disinformation, political manipulation, proxy forces and cyberattacks—cap an incisive look at the next phase of warfare. “Hybrid warfare particularly appeals to China and Russia, since they are much more able to control the information their populaces receive than are their Western adversaries,”
  • And with the line between limited and total wars growing fuzzier every year, the combatant of the next war might be a woman sitting at a drone desk, a computer geek hacking into a power grid or a robotics designer refining directed-energy weapons systems.
  • “Conflict” is, in some ways, an extension of Mr. Roberts’s thesis in “The Storm of War” (2009)—that dictatorships tend to crack under the stress of a sustained war against popular democracies. While autocracies enjoy some advantages at war’s outset—they are nimble and can achieve true strategic surprise, for instance—if the sucker punch doesn’t end the fight quickly, democracies, shocked into action, may bring to bear more motivated, more efficient and often larger forces to turn the tide.
  • Both men see modern military history as a succession of partnerships created to counter violent challenges from nationalists, terrorists and dictators.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of “do your own research” that involves batting away bullshit and parsing half-truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

Is Argentina the First A.I. Election? - The New York Times - 0 views

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask (see the sketch after this list).
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.
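The Times piece does not publish the campaigns’ actual tooling, but the workflow it describes maps onto a well-documented open-source technique: fine-tuning Stable Diffusion on a small set of photos of one subject (DreamBooth is the usual method), then prompting the tuned model to render that subject in arbitrary scenes. Below is a minimal sketch of the generation step using Hugging Face’s diffusers library; the checkpoint path, placeholder token and prompts are illustrative assumptions, not anything attributed to the campaigns.

```python
# Sketch only: assumes a Stable Diffusion checkpoint has already been
# fine-tuned (e.g. with DreamBooth) on roughly 10-20 photos of one person,
# with that likeness bound to a rare placeholder token such as "sks person".
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights (hypothetical local path: the output
# directory of the earlier training run).
pipe = StableDiffusionPipeline.from_pretrained(
    "./sd15-finetuned-candidate",
    torch_dtype=torch.float16,
).to("cuda")

# The placeholder token now resolves to the trained likeness, so an
# ordinary text prompt can stage the person in any scene.
image = pipe(
    "movie poster of sks person as a zombie, dramatic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("campaign_image.png")
```

Once the one-time fine-tune is done, each new image is a single prompt away, which is what lets a small team produce in minutes the kind of viral content that previously took graphic designers days or weeks.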
Javier E

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times - 0 views

  • haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian - 0 views

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video
  • Watching a movie, however, feels like you’ve been transported into the content.
  • that raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out but it’s not clear how much they’ve been worrying about the consequences.
  • it is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • more broadly, spatial computing is going to alter what we consider reality
  • Researchers from Stanford and Michigan University recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature which brings VR content into your real-world surrounding so you see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing it, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before