
Home/ TOK Friends/ Group items tagged bots


Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University at Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’”
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
knudsenlu

Huge MIT Study of 'Fake News': Falsehoods Win on Twitter - The Atlantic - 0 views

  • “Falsehood flies, and the Truth comes limping after it,” Jonathan Swift once wrote. It was hyperbole three centuries ago. But it is a factual description of social media, according to an ambitious and first-of-its-kind study published Thursday in Science.
  • By every common metric, falsehood consistently dominates the truth on Twitter, the study finds: Fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.
  • “It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”
  • A false story is much more likely to go viral than a real story, the authors find. A false story reaches 1,500 people six times quicker, on average, than a true story does.
  • “In short, I don’t think there’s any reason to doubt the study’s results,” said Rebekah Tromble, a professor of political science at Leiden University in the Netherlands, in an email.
  • It’s a question that can have life-or-death consequences. “[Fake news] has become a white-hot political and, really, cultural topic, but the trigger for us was personal events that hit Boston five years ago,” said Deb Roy, a media scientist at MIT and one of the authors of the new study.
  • Ultimately, they found about 126,000 tweets, which, together, had been retweeted more than 4.5 million times. Some linked to “fake” stories hosted on other websites. Some started rumors themselves, either in the text of a tweet or in an attached image. (The team used a special program that could search for words contained within static tweet images.) And some contained true information or linked to it elsewhere.
  • Tweet A and Tweet B both have the same size audience, but Tweet B has more “depth,” to use Vosoughi’s term. It chained together retweets, going viral in a way that Tweet A never did. “It could reach 1,000 retweets, but it has a very different shape,” he said. Here’s the thing: Fake news dominates according to both metrics. It consistently reaches a larger audience, and it tunnels much deeper into social networks than real news does. The authors found that accurate news wasn’t able to chain together more than 10 retweets. Fake news could put together a retweet chain 19 links long—and do it 10 times as fast as accurate news put together its measly 10 retweets.
  • What does this look like in real life? Take two examples from the last presidential election. In August 2015, a rumor circulated on social media that Donald Trump had let a sick child use his plane to get urgent medical care. Snopes confirmed almost all of the tale as true. But according to the team’s estimates, only about 1,300 people shared or retweeted the story.
  • Why does falsehood do so well? The MIT team settled on two hypotheses. First, fake news seems to be more “novel” than real news. Falsehoods are often notably different from all the tweets that appeared in a user’s timeline in the 60 days prior to their retweeting them, the team found. Second, fake news evokes much more emotion than the average tweet. The researchers created a database of the words that Twitter users used to reply to the 126,000 contested tweets, then analyzed it with a state-of-the-art sentiment-analysis tool. Fake tweets tended to elicit words associated with surprise and disgust, while accurate tweets summoned words associated with sadness and trust, they found.
  • It suggests—to me, at least, a Twitter user since 2007, and someone who got his start in journalism because of the social network—that social-media platforms do not encourage the kind of behavior that anchors a democratic government. On platforms where every user is at once a reader, a writer, and a publisher, falsehoods are too seductive not to succeed: The thrill of novelty is too alluring, the titillation of disgust too difficult to transcend. After a long and aggravating day, even the most staid user might find themselves lunging for the politically advantageous rumor. Amid an anxious election season, even the most public-minded user might subvert their higher interest to win an argument.
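The “depth” metric Vosoughi describes, the length of the longest retweet chain, can be made concrete with a short sketch. The edge-list representation and function names below are assumptions for illustration, not the MIT team’s actual code:

```python
from collections import defaultdict

def cascade_depth(edges):
    """Longest retweet chain in a cascade, given (parent, child) retweet edges.

    Depth 0 means the original tweet was never retweeted; a chain of 19
    successive retweets has depth 19, no matter how wide the cascade is.
    """
    children = defaultdict(list)
    nodes, child_nodes = set(), set()
    for parent, child in edges:
        children[parent].append(child)
        nodes.update((parent, child))
        child_nodes.add(child)
    roots = nodes - child_nodes  # the original tweet(s)

    def depth(node):
        kids = children.get(node, [])
        return 0 if not kids else 1 + max(depth(k) for k in kids)

    return max((depth(r) for r in roots), default=0)

# Tweet A: 1,000 direct retweets of the original -> wide but shallow
tweet_a = [("orig", f"rt{i}") for i in range(1000)]
# Tweet B: a chain of 19 successive retweets -> narrow but deep
tweet_b = [(f"u{i}", f"u{i+1}") for i in range(19)]
print(cascade_depth(tweet_a))  # 1
print(cascade_depth(tweet_b))  # 19
```

This captures why the two tweets in the excerpt differ in shape even with the same audience size: reach counts nodes, depth counts links in the longest chain.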
sanderk

How YouTube's Recommendation Algorithm Really Works - The Atlantic - 0 views

  • YouTube wants to recommend things people will like, and the clearest signal of that is whether other people liked them. Pew found that 64 percent of recommendations went to videos with more than a million views. The 50 videos that YouTube recommended most often had been viewed an average of 456 million times each. Popularity begets popularity, at least in the case of users (or bots, as here) that YouTube doesn’t know much about.
  • So, the challenge becomes how to recommend “new videos that users want to watch” when those videos are new to the system and low in views. (Finding fresh, potentially hot videos is important, YouTube researchers have written, for “propagating viral content.”)
  • The system learns from a video’s early performance, and if it does well, views can grow rapidly. In one case, a highly recommended kids’ video went from 34,000 views when Pew first encountered it in July to 30 million in August.
  • ...4 more annotations...
  • First, as Pew’s software made choices, the system selected longer videos. It’s as if the software recognizes that the user is going to be around for a while, and starts to serve up longer fare. Second, it also began to recommend more popular videos regardless of how popular the starting video was.
  • more than 70 percent of the videos that YouTube recommended showed up on the list only once. It’s impossible to examine how hundreds of thousands of videos connect to each first random video when there are such limited data about each one.
  • People want to know if YouTube regularly radicalizes people with its recommendations, as the scholar Zeynep Tufekci has suggested. This study suggests that YouTube pushes an anonymous user toward more popular, not more fringe, content.
  • For my November magazine story about children’s YouTube, the company’s answer to these kinds of troubling suggestions was that YouTube isn’t for kids. Children, they told me, should be using only the YouTube Kids app, which has been built as a safe space for them
Javier E

In the Battle Between Bots and Comedians, A.I. Is Killing - The New York Times - 1 views

  • Tony Veale, a computer scientist who wrote a book on comedy and A.I., “Your Wit Is My Command,” is impressed with new large-language models’ ability to imitate genre and voice, analyze and generate metaphors, explain itself and even admit mistakes. He’s bullish on computers making professional-level jokes in five years and when asked about originality responded that A.I.’s process isn’t any different from that of young artists. “Many comedians, such as Eddie Murphy and Jerry Seinfeld, trained themselves by repeatedly listening to and repeating Bill Cosby’s early comedy albums,” he wrote in an email. “We all learn from those we aim to emulate and transcend.”
  • Plus: Much comedy doesn’t get that far past the imitation stage. People like old jokes. Sitcoms and stand-up are often derivative. Topical comedy often leans on formulaic phrasing and predictable rhythms
Javier E

The AI is eating itself - by Casey Newton - Platformer - 0 views

  • there also seems to be little doubt that AI is corroding the web.
  • Two new studies offered some cause for alarm. (I discovered both in the latest edition of Import AI, the indispensable weekly newsletter from Anthropic co-founder and former journalist Jack Clark.)
  • The first study, which had an admittedly small sample size, found that crowd-sourced workers on Amazon’s Mechanical Turk platform increasingly admit to using LLMs to perform text-based tasks.
  • ...6 more annotations...
  • Until now, the assumption has been that they will answer truthfully based on their own experiences. In a post-ChatGPT world, though, academics can no longer make that assumption. Given the mostly anonymous, transactional nature of the assignment, it’s easy to imagine a worker signing up to participate in a large number of studies and outsource all their answers to a bot. This “raises serious concerns about the gradual dilution of the ‘human factor’ in crowdsourced text data,” the researchers write.
  • “This, if true, has big implications,” Clark writes. “It suggests the proverbial mines from which companies gather the supposed raw material of human insights are now instead being filled up with counterfeit human intelligence.”
  • A second, more worrisome study comes from researchers at the University of Oxford, University of Cambridge, University of Toronto, and Imperial College London. It found that training AI systems on data generated by other AI systems — synthetic data, to use the industry’s term — causes models to degrade and ultimately collapse. While the decay can be managed by using synthetic data sparingly, researchers write, the idea that models can be “poisoned” by feeding them their own outputs raises real risks for the web
  • that’s a problem, because — to bring together the threads of today’s newsletter so far — AI output is spreading to encompass more of the web every day. “The obvious larger question,” Clark writes, “is what this does to competition among AI developers as the internet fills up with a greater percentage of generated versus real content.”
  • In The Verge, Vincent argues that the current wave of disruption will ultimately bring some benefits, even if it’s only to unsettle the monoliths that have dominated the web for so long. “Even if the web is flooded with AI junk, it could prove to be beneficial, spurring the development of better-funded platforms,” he writes. “If Google consistently gives you garbage results in search, for example, you might be more inclined to pay for sources you trust and visit them directly.”
  • the glut of AI text will leave us with a web where the signal is ever harder to find in the noise. Early results suggest that these fears are justified — and that soon everyone on the internet, no matter their job, may soon find themselves having to exert ever more effort seeking signs of intelligent life.
peterconnelly

Opinion | Elon Musk's Tesla Management Is a Bad Sign for Twitter - The New York Times - 0 views

  • His promises to preserve free speech, ban spam bots and dramatically boost revenue may have earned the blessing of the company’s founder, Jack Dorsey, but with Twitter’s stock falling well below his offer price, Mr. Musk appears to be reneging on a deal that has made even Wall Street grow skeptical.
  • The way that he has managed and marketed his businesses from Tesla’s early days reveals a dysfunction behind the automaker’s veneer of technofuturism and past stock market successes.
  • ...11 more annotations...
  • he forces his employees to bridge the enormous gap between technological reality and his dreams. This disconnect fosters a negligent and sometimes cruel workplace, to disastrous effect.
  • That fully self-driving announcement that so delighted his fans came as a far more jarring revelation to the project’s engineers, who found out about their staggering new mission when Mr. Musk tweeted about it.
  • This is the fundamental weakness of every organization run as a cult of personality: The dear leader can’t be everywhere or make every decision but often fails to provide the clear code of values that allows managers to independently shape their decisions around common goals.
  • Lawsuits by workers and California’s Department of Fair Employment and Housing allege that Black workers were tasked with menial physical labor in parts of the factory nicknamed “the plantation,” where they were subjected to racist slurs and graffiti.
  • He ultimately gave up and cobbled together a manual-labor-intensive production line in an open-air tent.
  • Female workers have sued, alleging a pervasive culture of sexual harassment and groping by supervisors. Mr. Musk was indifferent, emailing workers who experienced abuse that “it is important to be thick-skinned.”
  • Mr. Musk’s reliance on hype is especially jarring.
  • By moving to buy Twitter, Mr. Musk has not only added another distraction to his long list but has also already shown the same drive to announce sweeping decisions in public.
  • Ultimately Mr. Musk’s goals for Twitter, as they are for Tesla, are not about making the right decisions for his companies or the people who make them possible.
  • They are about playing to the crowd and burnishing the legend that keeps fresh bodies and minds moving through the businesses that chew them up and spit them out.
  • Elon Musk's management at Tesla and his buying of Twitter
peterconnelly

Meet the Wikipedia editor who published the Buffalo shooting entry minutes after it sta... - 0 views

  • After Jason Moore, from Portland, Oregon, saw headlines from national news sources on Google News about the Buffalo shooting at a local supermarket on Saturday afternoon, he did a quick search for the incident on Wikipedia. When no results appeared, he drafted a single sentence: "On May 14, 2022, 10 people were killed in a mass shooting in Buffalo, New York." He hit save and published the entry on Wikipedia in less than a minute.
  • That article, which as of Friday has been viewed more than 900,000 times, has since undergone 1,071 edits by 223 editors who've voluntarily updated the page on the internet's free and largest crowdsourced encyclopedia.
  • He's credited with creating 50,000 entries
  • ...13 more annotations...
  • In the middle of breaking news, when people are searching for information, some platforms can present more questions than answers. Although Wikipedia is not staffed with professional journalists, it is viewed as an authoritative source by much of the public, for better or for worse. Its entries are also used for fact-checking purposes by some of the biggest social platforms, adding to the stakes and reach of the work from Moore and others.
  • "Editing Wikipedia can absolutely take an emotional toll on me, especially when working on difficult topics such as the COVID-19 pandemic, mass shootings, terrorist attacks, and other disasters," he said.
  • "I like the instant gratification of making the internet better," he said.
  • "I want to direct people to something that is going to provide them with much more reliable information at a time when it's very difficult for people to understand what sources they can trust."
  • "It is considered cool if you're the first person who creates an article, especially if you do it well with high-quality contributions," said Rasberry.
  • To help patrol incoming edits and predict misconduct or errors, Wikipedia -- like Twitter -- uses artificial intelligence bots that can escalate suspicious content to human reviewers who monitor content.
  • Rasberry, who also wrote the Wikipedia page on the platform's fact checking processes, said Wikipedia does not employ paid staff to monitor anything unless it involves “strange and unusual serious crimes like terrorism or real world violence, such as using Wikipedia to make threats, plan to commit suicide, or when Wikipedia itself is part of a crime.”
  • Rasberry said flaws range from a geographical bias, which is related to challenges with communicating across languages; access to internet in lower and middle income countries; and barriers to freedom of journalism around the world.
  • "I've got many other editors that I'm working with who will back me, so when we encounter vandalism or trolls or misinformation or disinformation, editors are very quick to revert inappropriate edits or remove inappropriate content or poorly sourced content," Moore said.
  • While "edit wars" can happen on pages, Rasberry said this tends to occur more often over social issues rather than news.
  • Wikipedia also publicly displays who edits each version of an article via its history page, along with a "talk" page for each post that allows editors to openly discuss edits.
  • "If no reliable sources can be found on a topic, Wikipedia should not have an article on it," the page said.
  • "If it was a paid advertising site or if it had a different mission, I wouldn't waste my time."
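The patrol workflow described above, where AI bots score incoming edits and escalate suspicious ones to human reviewers, can be sketched in a few lines. Everything here is a toy illustration: the function names, the threshold, and the scoring heuristic are assumptions, not Wikipedia's actual ORES pipeline:

```python
def triage_edits(edits, score_edit, threshold=0.8):
    """Route each incoming edit: high damage scores go to human review,
    the rest are published without intervention."""
    escalated, auto_accepted = [], []
    for edit in edits:
        if score_edit(edit) >= threshold:
            escalated.append(edit)       # a human reviewer takes a look
        else:
            auto_accepted.append(edit)   # published as-is
    return escalated, auto_accepted

def naive_score(edit):
    """Stand-in damage scorer: flags edits that blank large spans of text
    or insert an obvious insult. A real system uses a trained model."""
    removed = len(edit["before"]) - len(edit["after"])
    blanked = max(removed, 0) / max(len(edit["before"]), 1)
    sweary = 0.5 if "idiot" in edit["after"].lower() else 0.0
    return min(blanked + sweary, 1.0)
```

The design point is that the bot never deletes anything on its own; it only changes which edits a human looks at first, which is how a small volunteer pool keeps up with the edit firehose.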
Javier E

Opinion | What College Students Need Is a Taste of the Monk's Life - The New York Times - 0 views

  • When she registered last fall for the seminar known around campus as the monk class, she wasn’t sure what to expect.
  • “You give up technology, and you can’t talk for a month,” Ms. Rodriguez told me. “That’s all I’d heard. I didn’t know why.” What she found was a course that challenges students to rethink the purpose of education, especially at a time when machine learning is getting way more press than the human kind.
  • Each week, students would read about a different monastic tradition and adopt some of its practices. Later in the semester, they would observe a one-month vow of silence (except for discussions during Living Deliberately) and fast from technology, handing over their phones to him.
  • ...50 more annotations...
  • Yes, he knew they had other classes, jobs and extracurriculars; they could make arrangements to do that work silently and without a computer.
  • The class eased into the vow of silence, first restricting speech to 100 words a day. Other rules began on Day 1: no jewelry or makeup in class. Men and women sat separately and wore different “habits”: white shirts for the men, women in black. (Nonbinary and transgender students sat with the gender of their choice.)
  • Dr. McDaniel discouraged them from sharing personal information; they should get to know one another only through ideas. “He gave us new names, based on our birth time and day, using a Thai birth chart,”
  • “We were practicing living a monastic life. We had to wake up at 5 a.m. and journal every 30 minutes.”
  • If you tried to cruise to a C, you missed the point: “I realized the only way for me to get the most out of this class was to experience it all,” she said. (She got Dr. McDaniel’s permission to break her vow of silence in order to talk to patients during her clinical rotation.)
  • Dr. McDaniel also teaches a course called Existential Despair. Students meet once a week from 5 p.m. to midnight in a building with comfy couches, turn over their phones and curl up to read an assigned novel (cover to cover) in one sitting — books like James Baldwin’s “Giovanni’s Room” and José Saramago’s “Blindness.” Then they stay up late discussing it.
  • “The course is not about hope, overcoming things, heroic stories,” Dr. McDaniel said. Many of the books “start sad. In the middle they’re sad. They stay sad. I’m not concerned with their 20-year-old self. I’m worried about them at my age, dealing with breast cancer, their dad dying, their child being an addict, a career that never worked out — so when they’re dealing with the bigger things in life, they know they’re not alone.”
  • Both courses have long wait lists. Students are hungry for a low-tech, introspective experience —
  • Research suggests that underprivileged young people have far fewer opportunities to think for unbroken stretches of time, so they may need even more space in college to develop what social scientists call cognitive endurance.
  • Yet the most visible higher ed trends are moving in the other direction
  • Rather than ban phones and laptops from class, some professors are brainstorming ways to embrace students’ tech addictions with class Facebook and Instagram accounts, audience response apps — and perhaps even including the friends and relatives whom students text during class as virtual participants in class discussion.
  • Then there’s that other unwelcome classroom visitor: artificial intelligence.
  • stop worrying and love the bot by designing assignments that “help students develop their prompting skills” or “use ChatGPT to generate a first draft,” according to a tip sheet produced by the Center for Teaching and Learning at Washington University in St. Louis.
  • It’s not at all clear that we want a future dominated by A.I.’s amoral, Cheez Whiz version of human thought
  • It is abundantly clear that texting, tagging and chatbotting are making students miserable right now.
  • One recent national survey found that 60 percent of American college students reported the symptoms of at least one mental health problem and that 15 percent said they were considering suicide
  • A recent meta-analysis of 36 studies of college students’ mental health found a significant correlation between longer screen time and higher risk of anxiety and depression
  • And while social media can sometimes help suffering students connect with peers, research on teenagers and college students suggests that overall, the support of a virtual community cannot compensate for the vortex of gossip, bullying and Instagram posturing that is bound to rot any normal person’s self-esteem.
  • We need an intervention: maybe not a vow of silence but a bold move to put the screens, the pinging notifications and creepy humanoid A.I. chatbots in their proper place
  • it does mean selectively returning to the university’s roots in the monastic schools of medieval Europe and rekindling the old-fashioned quest for meaning.
  • Colleges should offer a radically low-tech first-year program for students who want to apply: a secular monastery within the modern university, with a curated set of courses that ban glowing rectangles of any kind from the classroom
  • Students could opt to live in dorms that restrict technology, too
  • I prophesy that universities that do this will be surprised by how much demand there is. I frequently talk to students who resent the distracting laptops all around them during class. They feel the tug of the “imaginary string attaching me to my phone, where I have to constantly check it,”
  • Many, if not most, students want the elusive experience of uninterrupted thought, the kind where a hash of half-baked notions slowly becomes an idea about the world.
  • Even if your goal is effective use of the latest chatbot, it behooves you to read books in hard copies and read enough of them to learn what an elegant paragraph sounds like. How else will students recognize when ChatGPT churns out decent prose instead of bureaucratic drivel?
  • Most important, students need head space to think about their ultimate values.
  • His course offers a chance to temporarily exchange those unconscious structures for a set of deliberate, countercultural ones.
  • here are the student learning outcomes universities should focus on: cognitive endurance and existential clarity.
  • Contemplation and marathon reading are not ends in themselves or mere vacations from real life but are among the best ways to figure out your own answer to the question of what a human being is for
  • When students finish, they can move right into their area of specialization and wire up their skulls with all the technology they want, armed with the habits and perspective to do so responsibly
  • it’s worth learning from the radicals. Dr. McDaniel, the religious studies professor at Penn, has a long history with different monastic traditions. He grew up in Philadelphia, educated by Hungarian Catholic monks. After college, he volunteered in Thailand and Laos and lived as a Buddhist monk.
  • He found that no amount of academic reading could help undergraduates truly understand why “people voluntarily take on celibacy, give up drinking and put themselves under authorities they don’t need to,” he told me. So for 20 years, he has helped students try it out — and question some of their assumptions about what it means to find themselves.
  • “On college campuses, these students think they’re all being individuals, going out and being wild,” he said. “But they’re in a playpen. I tell them, ‘You know you’ll be protected by campus police and lawyers. You have this entire apparatus set up for you. You think you’re being an individual, but look at your four friends: They all look exactly like you and sound like you. We exist in these very strict structures we like to pretend don’t exist.’”
  • Colleges could do all this in classes integrated with general education requirements: ideally, a sequence of great books seminars focused on classic texts from across different civilizations.
  • “For the last 1,500 years, Benedictines have had to deal with technology,” Placid Solari, the abbot there, told me. “For us, the question is: How do you use the tool so it supports and enhances your purpose or mission and you don’t get owned by it?”
  • for novices at his monastery, “part of the formation is discipline to learn how to control technology use.” After this initial time of limited phone and TV “to wean them away from overdependence on technology and its stimulation,” they get more access and mostly make their own choices.
  • Evan Lutz graduated this May from Belmont Abbey with a major in theology. He stressed the special Catholic context of Belmont’s resident monks; if you experiment with monastic practices without investigating the whole worldview, it can become a shallow kind of mindfulness tourism.
  • The monks at Belmont Abbey do more than model contemplation and focus. Their presence compels even non-Christians on campus to think seriously about vocation and the meaning of life. “Either what the monks are doing is valuable and based on something true, or it’s completely ridiculous,” Mr. Lutz said. “In both cases, there’s something striking there, and it asks people a question.”
  • Pondering ultimate questions and cultivating cognitive endurance should not be luxury goods.
  • David Peña-Guzmán, who teaches philosophy at San Francisco State University, read about Dr. McDaniel’s Existential Despair course and decided he wanted to create a similar one. He called it the Reading Experiment. A small group of humanities majors gathered once every two weeks for five and a half hours in a seminar room equipped with couches and a big round table. They read authors ranging from Jean-Paul Sartre to Frantz Fanon
  • “At the beginning of every class I’d ask students to turn off their phones and put them in ‘the Basket of Despair,’ which was a plastic bag,” he told me. “I had an extended chat with them about accessibility. The point is not to take away the phone for its own sake but to take away our primary sources of distraction. Students could keep the phone if they needed it. But all of them chose to part with their phones.”
  • Dr. Peña-Guzmán’s students are mostly working-class, first-generation college students. He encouraged them to be honest about their anxieties by sharing his own: “I said, ‘I’m a very slow reader, and it’s likely some or most of you will get further in the text than me because I’m E.S.L. and read quite slowly in English.’
  • For his students, the struggle to read long texts is “tied up with the assumption that reading can happen while multitasking and constantly interacting with technologies that are making demands on their attention, even at the level of a second,”
  • “These draw you out of the flow of reading. You get back to the reading, but you have to restart the sentence or even the paragraph. Often, because of these technological interventions into the reading experience, students almost experience reading backward — as constant regress, without any sense of progress. The more time they spend, the less progress they make.”
  • Dr. Peña-Guzmán dismissed the idea that a course like his is suitable only for students who don’t have to worry about holding down jobs or paying off student debt. “I’m worried by this assumption that certain experiences that are important for the development of personality, for a certain kind of humanistic and spiritual growth, should be reserved for the elite, especially when we know those experiences are also sources of cultural capital,
  • Courses like the Reading Experiment are practical, too, he added. “I can’t imagine a field that wouldn’t require some version of the skill of focused attention.”
  • The point is not to reject new technology but to help students retain the upper hand in their relationship with it
  • Ms. Rodriguez said that before she took Living Deliberately and Existential Despair, she didn’t distinguish technology from education. “I didn’t think education ever went without technology. I think that’s really weird now. You don’t need to adapt every piece of technology to be able to learn better or more,” she said. “It can form this dependency.”
  • The point of college is to help students become independent humans who can choose the gods they serve and the rules they follow rather than allow someone else to choose for them
  • The first step is dethroning the small silicon idol in their pocket — and making space for the uncomfortable silence and questions that follow
Javier E

Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations - The New York Times - 0 views

  • “I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.
  • At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.
  • For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT.
  • ...9 more annotations...
  • “I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said.
  • As Mr. Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.
  • “This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.”
  • The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges.
  • Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed. After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real.
  • Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases. “I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.
  • Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”
  • “This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”
  • In the declaration Mr. Schwartz filed this week, he described how he had posed questions to ChatGPT, and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot, which shows it tossing out words like “sure” and “certainly!” After one response, ChatGPT said cheerily, “I hope that helps!”
Javier E

Google's Relationship With Facts Is Getting Wobblier - The Atlantic - 0 views

  • Misinformation or even disinformation in search results was already a problem before generative AI. Back in 2017, The Outline noted that a snippet once confidently asserted that Barack Obama was the king of America.
  • This is what experts have worried about since ChatGPT first launched: false information confidently presented as fact, without any indication that it could be totally wrong. The problem is “the way things are presented to the user, which is Here’s the answer,” Chirag Shah, a professor of information and computer science at the University of Washington, told me. “You don’t need to follow the sources. We’re just going to give you the snippet that would answer your question. But what if that snippet is taken out of context?”
  • Responding to the notion that Google is incentivized to prevent users from navigating away, he added that “we have no desire to keep people on Google.”
  • ...15 more annotations...
  • Pandu Nayak, a vice president for search who leads the company’s search-quality teams, told me that snippets are designed to be helpful to the user, to surface relevant and high-caliber results. He argued that they are “usually an invitation to learn more” about a subject
  • “It’s a strange world where these massive companies think they’re just going to slap this generative slop at the top of search results and expect that they’re going to maintain quality of the experience,” Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern University, told me. “I’ve caught myself starting to read the generative results, and then I stop myself halfway through. I’m like, Wait, Nick. You can’t trust this.”
  • Nayak said the team focuses on the bigger underlying problem, and whether its algorithm can be trained to address it.
  • If Nayak is right, and people do still follow links even when presented with a snippet, anyone who wants to gain clicks or money through search has an incentive to capitalize on that—perhaps even by flooding the zone with AI-written content.
  • Nayak told me that Google plans to fight AI-generated spam as aggressively as it fights regular spam, and claimed that the company keeps about 99 percent of spam out of search results.
  • The result is a world that feels more confused, not less, as a result of new technology.
  • The Kenya result still pops up on Google, despite viral posts about it. This is a strategic choice, not an error. If a snippet violates Google policy (for example, if it includes hate speech) the company manually intervenes and suppresses it, Nayak said. However, if the snippet is untrue but doesn’t violate any policy or cause harm, the company will not intervene.
  • experts I spoke with had several ideas for how tech companies might mitigate the potential harms of relying on AI in search
  • For starters, tech companies could become more transparent about generative AI. Diakopoulos suggested that they could publish information about the quality of facts provided when people ask questions about important topics
  • They can use a coding technique known as “retrieval-augmented generation,” or RAG, which instructs the bot to cross-check its answer with what is published elsewhere, essentially helping it self-fact-check. (A spokesperson for Google said the company uses similar techniques to improve its output.) They could open up their tools to researchers to stress-test them. Or they could add more human oversight to their outputs, maybe investing in fact-checking efforts.
  • Fact-checking, however, is a fraught proposition. In January, Google’s parent company, Alphabet, laid off roughly 6 percent of its workers, and last month, the company cut at least 40 jobs in its Google News division. This is the team that, in the past, has worked with professional fact-checking organizations to add fact-checks into search results
  • Alex Heath, at The Verge, reported that top leaders were among those laid off, and Google declined to give me more information. It certainly suggests that Google is not investing more in its fact-checking partnerships as it builds its generative-AI tool.
  • Nayak acknowledged how daunting a task human-based fact-checking is for a platform of Google’s extraordinary scale. Fifteen percent of daily searches are ones the search engine hasn’t seen before, Nayak told me. “With this kind of scale and this kind of novelty, there’s no sense in which we can manually curate results.”
  • Creating an infinite, largely automated, and still accurate encyclopedia seems impossible. And yet that seems to be the strategic direction Google is taking.
  • A representative for Google told me that this was an example of a “false premise” search, a type that is known to trip up the algorithm. If she were trying to date me, she argued, she wouldn’t just stop at the AI-generated response given by the search engine, but would click the link to fact-check it.
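The retrieval-augmented-generation idea raised above can be sketched in a few lines. This is a minimal toy illustration, not any vendor's actual pipeline: the corpus, the queries, and the word-overlap heuristic are all invented for the example, and a real system would use a search index and a language model rather than set intersection.

```python
# Minimal sketch of RAG's "cross-check" step: retrieve documents for a query,
# then ask whether the generated answer is covered by the retrieved evidence.
# Corpus, scoring, and threshold are illustrative assumptions only.

CORPUS = [
    "Kenya has consistently ranked among the top African countries for literacy.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
]

def tokenize(text):
    """Lowercase and split text into a set of bare words."""
    return set(text.lower().replace(".", "").replace(",", "").split())

def retrieve(query, corpus, k=1):
    """Return the k documents sharing the most words with the query."""
    scored = sorted(corpus, key=lambda d: -len(tokenize(d) & tokenize(query)))
    return scored[:k]

def is_supported(answer, query, corpus, threshold=0.5):
    """Crude self-fact-check: do retrieved documents cover the answer's words?"""
    evidence = tokenize(" ".join(retrieve(query, corpus)))
    answer_words = tokenize(answer)
    return len(answer_words & evidence) / len(answer_words) >= threshold
```

A claim that matches the retrieved document passes the check, while a fabricated one fails; the point of the technique is exactly this grounding of output in retrieved text rather than in the model's unverified generation.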
Javier E

The Chatbots Are Here, and the Internet Industry Is in a Tizzy - The New York Times - 0 views

  • He cleared his calendar and asked employees to figure out how the technology, which instantly provides comprehensive answers to complex questions, could benefit Box, a cloud computing company that sells services that help businesses manage their online data.
  • Mr. Levie’s reaction to ChatGPT was typical of the anxiety — and excitement — over Silicon Valley’s new new thing. Chatbots have ignited a scramble to determine whether their technology could upend the economics of the internet, turn today’s powerhouses into has-beens or create the industry’s next giants.
  • Cloud computing companies are rushing to deliver chatbot tools, even as they worry that the technology will gut other parts of their businesses. E-commerce outfits are dreaming of new ways to sell things. Social media platforms are being flooded with posts written by bots. And publishing companies are fretting that even more dollars will be squeezed out of digital advertising.
  • The volatility of chatbots has made it impossible to predict their impact. In one second, the systems impress by fielding a complex request for a five-day itinerary, making Google’s search engine look archaic. A moment later, they disturb by taking conversations in dark directions and launching verbal assaults.
  • The result is an industry gripped with the question: What do we do now?
  • The A.I. systems could disrupt $100 billion in cloud spending, $500 billion in digital advertising and $5.4 trillion in e-commerce sales,
  • As Microsoft figures out a chatbot business model, it is forging ahead with plans to sell the technology to others. It charges $10 a month for a cloud service, built in conjunction with the OpenAI lab, that provides developers with coding suggestions, among other things.
  • Smaller companies like Box need help building chatbot tools, so they are turning to the giants that process, store and manage information across the web. Those companies — Google, Microsoft and Amazon — are in a race to provide businesses with the software and substantial computing power behind their A.I. chatbots.
  • “The cloud computing providers have gone all in on A.I. over the last few months,
  • “They are realizing that in a few years, most of the spending will be on A.I., so it is important for them to make big bets.”
  • Yusuf Mehdi, the head of Bing, said the company was wrestling with how the new version would make money. Advertising will be a major driver, he said, but the company expects fewer ads than traditional search allows.
  • Google, perhaps more than any other company, has reason to both love and hate the chatbots. It has declared a “code red” because their abilities could be a blow to its $162 billion business showing ads on searches.
  • “The discourse on A.I. is rather narrow and focused on text and the chat experience,” Mr. Taylor said. “Our vision for search is about understanding information and all its forms: language, images, video, navigating the real world.”
  • Sridhar Ramaswamy, who led Google’s advertising division from 2013 to 2018, said Microsoft and Google recognized that their current search business might not survive. “The wall of ads and sea of blue links is a thing of the past,” said Mr. Ramaswamy, who now runs Neeva, a subscription-based search engine.
  • As that underlying tech, known as generative A.I., becomes more widely available, it could fuel new ideas in e-commerce. Late last year, Manish Chandra, the chief executive of Poshmark, a popular online secondhand store, found himself daydreaming during a long flight from India about chatbots building profiles of people’s tastes, then recommending and buying clothes or electronics. He imagined grocers instantly fulfilling orders for a recipe.
  • “It becomes your mini-Amazon,” said Mr. Chandra, who has made integrating generative A.I. into Poshmark one of the company’s top priorities over the next three years. “That layer is going to be very powerful and disruptive and start almost a new layer of retail.”
  • In early December, users of Stack Overflow, a popular social network for computer programmers, began posting substandard coding advice written by ChatGPT. Moderators quickly banned A.I.-generated text
  • But people could post this questionable content far faster than they could write posts on their own, said Dennis Soemers, a moderator for the site. “Content generated by ChatGPT looks trustworthy and professional, but often isn’t,”
  • When websites thrived during the pandemic as traffic from Google surged, Nilay Patel, editor in chief of The Verge, a tech news site, warned publishers that the search giant would one day turn off the spigot. He had seen Facebook stop linking out to websites and foresaw Google following suit in a bid to boost its own business.
  • He predicted that visitors from Google would drop from a third of websites’ traffic to nothing. He called that day “Google zero.”
  • Because chatbots replace website search links with footnotes to answers, he said, many publishers are now asking if his prophecy is coming true.
  • Strategists and engineers at the digital advertising company CafeMedia have met twice a week to contemplate a future where A.I. chatbots replace search engines and squeeze web traffic.
  • The group recently discussed what websites should do if chatbots lift information but send fewer visitors. One possible solution would be to encourage CafeMedia’s network of 4,200 websites to insert code that limited A.I. companies from taking content, a practice currently allowed because it contributes to search rankings.
  • Courts are expected to be the ultimate arbiter of content ownership. Last month, Getty Images sued Stability AI, the start-up behind the art generator tool Stable Diffusion, accusing it of unlawfully copying millions of images. The Wall Street Journal has said using its articles to train an A.I. system requires a license.
  • In the meantime, A.I. companies continue collecting information across the web under the “fair use” doctrine, which permits limited use of material without permission.
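The “code that limited A.I. companies from taking content” discussed in the CafeMedia meetings would, in practice, most likely take the form of robots.txt directives. A sketch of what such a file might look like follows; the user-agent names are real crawler identifiers (CCBot is Common Crawl’s crawler, a common source of AI training data; GPTBot is OpenAI’s), but which crawlers a site chooses to block, and whether a given crawler honors the file, is outside the site’s control.

```
# robots.txt — allow ordinary search indexing, opt out of AI training crawlers.
# Compliance with these rules is voluntary on the crawler's part.

User-agent: CCBot
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Disallow:
```

This captures the trade-off the article describes: the site stays visible to conventional search, while crawlers that feed training corpora and honor robots.txt are turned away.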