Home/ TOK Friends/ Group items tagged deepfake

Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 1 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • are now using its existence as a pretext to dismiss accurate information
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
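The fragility of Google’s metadata safeguard described above is easy to demonstrate. Below is a minimal sketch, assuming the Pillow imaging library (the embedded “edit record” tag value is hypothetical): simply re-saving a JPEG without explicitly carrying over its EXIF block silently discards the record.

```python
# Sketch (assumes Pillow): re-encoding a JPEG drops its EXIF metadata
# unless the re-encoder deliberately preserves it.
from io import BytesIO
from PIL import Image

# Build a tiny JPEG whose EXIF carries a hypothetical edit record.
exif = Image.Exif()
exif[0x010E] = "face replaced by editing tool"  # 0x010E = ImageDescription
original = BytesIO()
Image.new("RGB", (8, 8), "gray").save(original, "JPEG", exif=exif)

# Open and re-save it, as any app, messenger, or website re-encoder might.
stripped = BytesIO()
Image.open(BytesIO(original.getvalue())).save(stripped, "JPEG")

print(dict(Image.open(BytesIO(original.getvalue())).getexif()))  # record present
print(dict(Image.open(BytesIO(stripped.getvalue())).getexif()))  # record gone
```

Any copy that passes through such a re-encoding step loses the alteration record, which is why metadata-based provenance only protects the original file and faithful copies of it.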
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
Javier E

CarynAI, created with GPT-4 technology, will be your girlfriend - The Washington Post - 0 views

  • CarynAI also shows how AI applications can increase the ability of a single person to reach an audience of thousands in a way that, for users, may feel distinctly personal.
  • The impact could be enormous for someone forming something resembling a personal relationship with thousands or millions of online followers. It could also show how thin and tenuous these simulations of human connection could become.
  • CarynAI also is a reminder that sex and romance are often the first realm in which technological progress becomes profitable. Marjorie acknowledges that some of the exchanges with CarynAI become sexually explicit
  • CarynAI is the first major release from a company called Forever Voices. The company previously has created realistic AI chatbots that allow users to talk with replicated versions of Steve Jobs, Kanye West, Donald Trump and Taylor Swift
  • CarynAI is a far more sophisticated product, the company says, and part of Forever Voices’ new AI companion initiative, meant to provide users with a girlfriend-like experience that fans can emotionally bond with.
  • John Meyer, CEO and founder of Forever Voices, said that he created the company last year, after trying to use AI to develop ways to reconnect with his late father, who passed away in 2017. He built an AI voice chatbot that replicated his late father’s voice and personality to talk to and found the experience incredibly healing. “It was a remarkable experience to talk to him again in a super realistic way,” Meyer said. “I’ve been in tech my whole life, I’m a programmer, so it was easy for me to start building something like that especially as things got more advanced with the AI space.”
  • Meyer’s company has about 10 employees. One job Meyer is hoping to fill soon is chief ethics officer. “There are a lot of ways to do this wrong,”
  • One safeguard is trying to limit the amount of time a user is allowed to chat with CarynAI. To keep users from becoming addicted, CarynAI is programmed to wind down conversations after about an hour, encouraging users to pick back up later. But there is no hard time limit on use, and some users are spending hours speaking to CarynAI per day, according to Marjorie’s manager, Ishan Goel.
  • “I consider myself a futurist at heart and when I look into the future I believe this is the beginning of a very diverse future consisting of AI to human companionship,”
  • Elizabeth Snower, founder of ICONIQ, which creates conversational 3D avatars, predicts that soon there will be “AI influencers on every social platform that are influencing consumer decisions.”
  • “A lot of people have just been kind of really mad at the existence of this. They think that it’s the end of humanity,” she said.
  • Marjorie hopes the backlash will fade when other online personalities begin rolling out their own AI companions
  • “I think in the next five years, most Americans will have an AI companion in their pocket in some way, shape or form, whether it’s an ultra flirty AI that you’re dating, an AI that’s your personal trainer, or simply a tutor companion. Those are all things that we are building internally.”
  • That strikes AI adviser and investor Allie K. Miller as a likely outcome. “I can imagine a future in which everyone — celebrities, TV characters, influencers, your brother — has an online avatar that they invite their audience or friends to engage with. … With the accessibility of these models, I’m not surprised it’s expanding to scaled interpersonal relationships.”
Javier E

Opinion | The Apps on My Phone Are Stalking Me - The New York Times - 0 views

  • There is much about the future that keeps me up at night — A.I. weaponry, undetectable viral deepfakes
  • but in the last few years, one technological threat has blipped my fear radar much faster than others. That fear? Ubiquitous surveillance.
  • I am no longer sure that human civilization can undo or evade living under constant, extravagantly detailed physical and even psychic surveillance
  • as a species, we are not doing nearly enough to avoid always being watched or otherwise digitally recorded.
  • your location, your purchases, video and audio from within your home and office, your online searches and every digital wandering, biometric tracking of your face and other body parts, your heart rate and other vital signs, your every communication, recording, and perhaps your deepest thoughts or idlest dreams
  • in the future, if not already, much of this data and more will be collected and analyzed by some combination of governments and corporations, among them a handful of megacompanies whose powers nearly match those of governments
  • Over the last year, as part of Times Opinion’s Privacy Project, I’ve participated in experiments in which my devices were closely monitored in order to determine the kind of data that was being collected about me.
  • I’ve realized how blind we are to the kinds of insights tech companies are gaining about us through our gadgets. Our blindness not only keeps us glued to privacy-invading tech
  • it also means that we’ve failed to create a political culture that is in any way up to the task of limiting surveillance.
  • few of our cultural or political institutions are even much trying to tamp down the surveillance state.
  • Yet the United States and other supposedly liberty-loving Western democracies have not ruled out such a future
  • like Barack Obama before him, Trump and the Justice Department are pushing Apple to create a backdoor into the data on encrypted iPhones — they want the untrustworthy F.B.I. and any local cop to be able to see everything inside anyone’s phone.
  • the fact that both Obama and Trump agreed on the need for breaking iPhone encryption suggests how thoroughly political leaders across a wide spectrum have neglected privacy as a fundamental value worthy of protection.
  • Americans are sleepwalking into a future nearly as frightening as the one the Chinese are constructing. I choose the word “sleepwalking” deliberately, because when it comes to digital privacy, a lot of us prefer the comfortable bliss of ignorance.
  • Among other revelations: Advertising companies and data brokers are keeping insanely close tabs on smartphones’ location data, tracking users so precisely that their databases could arguably compromise national security or political liberty.
  • Tracking technologies have become cheap and widely available — for less than $100, my colleagues were able to identify people walking by surveillance cameras in Bryant Park in Manhattan.
  • The Clearview AI story suggests another reason to worry that our march into surveillance has become inexorable: Each new privacy-invading technology builds on a previous one, allowing for scary outcomes from new integrations and collections of data that few users might have anticipated.
  • The upshot: As the location-tracking apps followed me, I was able to capture the pings they sent to online servers — essentially recording their spying
  • On the map, you can see the apps are essentially stalking me. They see me drive out one morning to the gas station, then to the produce store, then to Safeway; later on I passed by a music school, stopped at a restaurant, then Whole Foods.
  • But location was only one part of the data the companies had about me; because geographic data is often combined with other personal information — including a mobile advertising ID that can help merge what you see and do online with where you go in the real world — the story these companies can tell about me is actually far more detailed than I can tell about myself.
  • I can no longer pretend I’ve got nothing to worry about. Sure, I’m not a criminal — but do I want anyone to learn everything about me?
  • more to the point: Is it wise for us to let any entity learn everything about everyone?
  • The remaining uncertainty about the surveillance state is not whether we will submit to it — only how readily and completely, and how thoroughly it will warp our society.
  • Will we allow the government and corporations unrestricted access to every bit of data we ever generate, or will we decide that some kinds of collections, like the encrypted data on your phone, should be forever off limits, even when a judge has issued a warrant for it?
  • In the future, will there be room for any true secret — will society allow any unrecorded thought or communication to evade detection and commercial analysis?
  • How completely will living under surveillance numb creativity and silence radical thought?
  • Can human agency survive the possibility that some companies will know more about all of us than any of us can ever know about ourselves?
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

'Meta-Content' Is Taking Over the Internet - The Atlantic - 0 views

  • Jenn, however, has complicated things by adding an unexpected topic to her repertoire: the dangers of social media. She recently spoke about disengaging from it for her well-being; she also posted an Instagram Story about the risks of ChatGPT
  • and, in none other than a YouTube video, recommended Neil Postman’s Amusing Ourselves to Death, a seminal piece of media critique from 1985 that denounces television’s reduction of life to entertainment.
  • (Her other book recommendations included Stolen Focus, by Johann Hari, and Recapture the Rapture, by Jamie Wheal.)
  • Social-media platforms are “preying on your insecurities; they’re preying on your temptations,” Jenn explained to me in an interview that shifted our parasocial connection, at least for an hour, to a mere relationship. “And, you know, I do play a role in this.” Jenn makes money through aspirational advertising, after all—a familiar part of any influencer’s job.
  • She’s pro–parasocial relationships, she explains to the camera, but only if we remain aware that we’re in one. “This relationship does not replace existing friendships, existing relationships,” she emphasizes. “This is all supplementary. Like, it should be in addition to your life, not a replacement.” I sat there watching her talk about parasocial relationships while absorbing the irony of being in one with her.
  • The open acknowledgment of social media’s inner workings, with content creators exposing the foundations of their content within the content itself, is what Alice Marwick, an associate communications professor at the University of North Carolina at Chapel Hill, described to me as “meta-content.”
  • Meta-content can be overt, such as the vlogger Casey Neistat wondering, in a vlog, if vlogging your life prevents you from being fully present in it;
  • But meta-content can also be subtle: a vlogger walking across the frame before running back to get the camera. Or influencers vlogging themselves editing the very video you’re watching, in a moment of space-time distortion.
  • Viewers don’t seem to care. We keep watching, fully accepting the performance. Perhaps that’s because the rise of meta-content promises a way to grasp authenticity by acknowledging artifice; especially in a moment when artifice is easier to create than ever before, audiences want to know what’s “real” and what isn’t.
  • “The idea of a space where you can trust no sources, there’s no place to sort of land, everything is put into question, is a very unsettling, unsatisfying way to live.
  • So we continue to search for, as Murray observes, the “agreed-upon things, our basic understandings of what’s real, what’s true.” But when the content we watch becomes self-aware and even self-critical, it raises the question of whether we can truly escape the machinations of social media. Maybe when we stare directly into the abyss, we begin to enjoy its company.
  • “The difference between BeReal and the social-media giants isn’t the former’s relationship to truth but the size and scale of its deceptions.” BeReal users still angle their camera and wait to take their daily photo at an aesthetic time of day. The snapshots merely remind us how impossible it is to stop performing online.
  • Jenn’s concern over the future of the internet stems, in part, from motherhood. She recently had a son, Lennon (whose first birthday party I watched on YouTube), and worries about the digital world he’s going to inherit.
  • Back in the age of MySpace, she had her own internet friends and would sneak out to parking lots at 1 a.m. to meet them in real life: “I think this was when technology was really used as a tool to connect us.” Now, she explained, it’s beginning to ensnare us. Posting content online is no longer a means to an end so much as the end itself.
  • We used to view influencers’ lives as aspirational, a reality that we could reach toward. Now both sides acknowledge that they’re part of a perfect product that the viewer understands is unattainable and the influencer acknowledges is not fully real.
  • “I forgot to say this to her in the interview, but I truly think that my videos are less about me and more of a reflection of where you are currently … You are kind of reflecting on your own life and seeing what resonates [with] you, and you’re discarding what doesn’t. And I think that’s what’s beautiful about it.”
  • meta-content is fundamentally a compromise. Recognizing the delusion of the internet doesn’t alter our course within it so much as remind us how trapped we truly are—and how we wouldn’t have it any other way.