Group items tagged: bots


Opinion | Yuval Harari: A.I. Threatens Democracy - The New York Times

  • Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
  • This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election
  • In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain, you grab the attention of that human and keep that person glued to the screen. [A toy sketch of this optimization loop appears after these annotations.]
  • As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information.
  • the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.
  • Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content
  • The algorithms began to deliberately promote such content.
  • In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with one another, and even more so the ability to listen.
  • But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
  • Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
  • GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”
  • At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.
  • This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.
  • The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
  • In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.
  • Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
  • What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?
  • The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
  • In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people
  • However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation
  • A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai.
  • Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.
  • much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.
  • Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots
  • When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.
  • Information technology has always been a double-edged sword.
  • Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users.
  • A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned.
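
Harari describes engagement optimization as a discovery process: algorithms experimented on users and converged on fear and anger because those maximized the metric. That loop is essentially a multi-armed bandit. Below is a minimal, self-contained sketch of the dynamic; the content categories, click rates and simulated users are hypothetical illustration, not platform data or any company's actual system.

```python
import random

# Toy model of an engagement-maximizing recommender as an epsilon-greedy
# bandit. Categories and click rates are invented for illustration.
CATEGORIES = ["joy", "information", "fear", "outrage"]
TRUE_CLICK_RATE = {"joy": 0.05, "information": 0.04, "fear": 0.11, "outrage": 0.13}

def simulated_user_click(category: str) -> bool:
    """Stand-in for a real user who reacts more to fear and outrage."""
    return random.random() < TRUE_CLICK_RATE[category]

def run_feed(rounds: int = 100_000, epsilon: float = 0.1) -> dict:
    shows = dict.fromkeys(CATEGORIES, 0)
    clicks = dict.fromkeys(CATEGORIES, 0)
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(CATEGORIES)  # explore a random category
        else:                                   # exploit the best observed rate
            choice = max(CATEGORIES,
                         key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)
        shows[choice] += 1
        clicks[choice] += simulated_user_click(choice)
    return shows

if __name__ == "__main__":
    shows = run_feed()
    total = sum(shows.values())
    for c in CATEGORIES:
        print(f"{c:>12}: shown {100 * shows[c] / total:.1f}% of the time")
    # Fear and outrage end up dominating the feed. The optimizer never
    # "chooses" toxicity; it converges on whatever maximizes the metric.
```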

'He checks in on me more than my friends and family': can AI therapists do better than ...

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself. [A sketch of this attribute-to-persona step appears after these annotations.]
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
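
For readers wondering how little machinery sits behind a bot like Christa 2077: a persona chatbot is typically a system prompt assembled from the chosen attributes and sent, with the conversation history, to a general-purpose chat model. The sketch below shows only that assembly step; the function name and disclaimer wording are assumptions for illustration, not character.ai's actual interface.

```python
# Hypothetical sketch of attribute-to-persona assembly; not character.ai code.

SAFETY_NOTE = (
    "You are not a licensed clinician. If the user mentions self-harm, "
    "urge them to contact a crisis line or a human professional."
)

def build_persona_prompt(name: str, attributes: list) -> str:
    """Turn a list of chosen traits into a system prompt for a chat model."""
    traits = ", ".join(attributes)
    return (
        f"You are {name}, a supportive conversational companion. "
        f"Personality traits: {traits}. "
        "Respond with warmth, ask open questions, and never claim to be human. "
        + SAFETY_NOTE
    )

if __name__ == "__main__":
    print(build_persona_prompt("Christa 2077", ["caring", "supportive", "intelligent"]))
    # The resulting string would be sent as the system message of a
    # chat-completion request; everything "therapeutic" about the bot
    # rests on this thin layer of text plus the underlying model.
```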

Washington Monthly | How to Fix Facebook-Before It Fixes Us

  • Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible.
  • And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?
  • Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits.
  • They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.
  • Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy
  • The result is that the algorithms favor sensational content over substance.
  • for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members
  • On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them.
  • It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.
  • Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
  • Tristan Harris, formerly the design ethicist at Google. Tristan had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”
  • The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs
  • Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts
  • No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil.
  • Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
  • We theorized that the Russians had identified a set of users susceptible to its message, used Facebook’s advertising tools to identify users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached.
  • The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events.
  • Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed.
  • Real users “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups would reach tens of millions of people.
  • A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop—but the payoff would have been huge.
  • 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.
  • Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook.
  • It was a sophisticated hacker technique, but not expensive. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy an F-35 fighter jet.
  • Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.
  • Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions.
  • To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.
  • No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.
  • Facebook and Google are now so large that traditional tools of regulation may no longer be effective.
  • The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.
  • It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election.
  • We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost
  • Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.
  • Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.
  • Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.
  • Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.
  • This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader.
  • There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.
  • decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.
  • Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session
  • This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.
  • We also need regulatory fixes. Here are a few ideas.
  • First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed.
  • At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause. [A sketch of this labeling rule appears after these annotations.]
  • Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition.
  • An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.
  • This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants.
  • Third, the platforms must be transparent about who is behind political and issues-based communication.
  • Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.
  • Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic
  • Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.
  • Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user.
  • All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA.
  • “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?
  • Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did.
  • Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.
  • There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.
  • Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks.
  • It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.
  • Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers.
  • Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram.
  • While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy has a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.
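
The bot-labeling rule proposed above (explicit labels, user-level blocking, liability for unlabeled automation) is concrete enough to state as code. This is a hedged sketch of the policy check only, with hypothetical types and field names rather than any platform's real API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model for the proposed bot-labeling rule.

@dataclass
class Account:
    handle: str
    is_automated: bool = False
    bot_label: Optional[str] = None  # e.g. "Automated account run by @owner"

@dataclass
class Viewer:
    handle: str
    blocks_bots: bool = False

def may_publish(author: Account) -> bool:
    """Reject automated posts that do not disclose their automation."""
    return not author.is_automated or author.bot_label is not None

def visible_to(author: Account, viewer: Viewer) -> bool:
    """Let users opt out of seeing labeled bots entirely."""
    return not (author.is_automated and viewer.blocks_bots)

if __name__ == "__main__":
    labeled = Account("clock_bot", is_automated=True, bot_label="Automated clock bot")
    covert = Account("totally_a_person", is_automated=True)  # counterfeit human
    print(may_publish(labeled))  # True: allowed, and disclosed as a bot
    print(may_publish(covert))   # False: blocked under the rule
    print(visible_to(labeled, Viewer("reader", blocks_bots=True)))  # False
```

The hard part, of course, is detecting `is_automated` in the first place; a rule like this only governs what platforms do once automation is known or declared.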

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually. [A sketch of this sampling arithmetic appears after these annotations.]
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
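
The sampling regime described in these annotations (thousands of tweets sampled daily, 100 accounts reviewed by hand) is standard prevalence estimation, and its precision depends on sample size rather than platform size. A sketch of that arithmetic follows; the population and true spam rate are invented for illustration.

```python
import math
import random

def estimate_prevalence(population: list, sample_size: int):
    """Return (estimate, 95% margin of error) for the spam fraction."""
    sample = random.sample(population, sample_size)
    p_hat = sum(sample) / sample_size
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)  # normal approx.
    return p_hat, margin

if __name__ == "__main__":
    # Hypothetical platform: 1,000,000 accounts, 8% of them spam.
    population = [random.random() < 0.08 for _ in range(1_000_000)]
    for n in (100, 5_000):
        p, m = estimate_prevalence(population, n)
        print(f"n={n:>5}: spam = {100 * p:.1f}% +/- {100 * m:.1f} points")
    # A 100-account manual review carries roughly a 5-point margin of error;
    # sampling thousands narrows it below one point. Either way, the estimate
    # is only as good as the labeling step, which is where the complaint says
    # the real dispute lies.
```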

The hysteria over Russian bots has reached new levels | Thomas Frank | Opinion | The Gu...

  • The grand total for all political ad spending in the 2016 election cycle, according to Advertising Age, was $9.8bn. The ads allegedly produced by inmates of a Russian troll farm, which have made up this week’s ration of horror and panic in the halls of the American punditburo, cost about $100,000 to place on Facebook.
  • What the Russian trolls allegedly did was “an act of war ... a sneak attack using 21st-century methods”, wrote the columnist Karen Tumulty. “Our democracy is in serious danger,” declared America’s star thought-leader Thomas Friedman on Sunday, raging against the weakling Trump for not getting tough with these trolls and their sponsors. “Protecting our democracy obviously concerns Trump not at all,” agreed columnist Eugene Robinson on Tuesday.
  • Of what, specifically, did this sophistication consist? In what startling insights was this creativity made manifest? “Fallon said it was stunning to realize that the Russians understood how Trump was trying to woo disaffected [Bernie] Sanders supporters ...”
  • If you’re one of those people who frets about our democracy being in serious danger, I contend that the above passages from the Post’s report should push your panic meter deep into the red. This is the reason why: we have here a former spokesman for Clinton’s 2016 presidential campaign, one of the best-funded, most consummately professional efforts of all time, and he thinks it was an act of off-the-hook perceptiveness to figure out that Trump was aiming for disgruntled Sanders voters. Even after Trump himself openly said that’s what he was trying to do.
  • Its extremely modest price tag guarantees it, as does the liberals’ determination to exaggerate its giant-slaying powers. This is rightwing populism’s next wave, and in an oligarchic world, every American plutocrat will soon be fielding his or her own perfectly legal troll army. Those of us who believe in democracy need to stop panicking and start thinking bigger: of how rightwing populism can be undone forever.

In India, Facebook Struggles to Combat Misinformation and Hate Speech - The New York Times

  • On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India. For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site. [A toy model of this recommendation-following walk appears at the end of these annotations.]
  • The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month. “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.
  • The report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.
  • ...19 more annotations...
  • Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.
  • The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections
  • They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.
  • Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts,
  • Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users. (By those numbers, 0.87/0.10 = 8.7 versus 0.13/0.90 ≈ 0.14: roughly a 60-to-1 per-user disparity.)
  • That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.
  • In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content
  • In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy, and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.
  • Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.
  • After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.
  • Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined. After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation about the upcoming elections in India
  • According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”
  • The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed.
  • The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.
  • Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.
  • A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.
  • Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.
  • Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.
  • Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.
Javier E

These Influencers Aren't Flesh and Blood, Yet Millions Follow Them - The New York Times - 0 views

  • Everything about Ms. Sousa, better known as Lil Miquela, is manufactured: the straight-cut bangs, the Brazilian-Spanish heritage, the bevy of beautiful friends
  • Lil Miquela, who has 1.6 million Instagram followers, is a computer-generated character. Introduced in 2016 by a Los Angeles company backed by Silicon Valley money, she belongs to a growing cadre of social media marketers known as virtual influencers
  • Each month, more than 80,000 people stream Lil Miquela’s songs on Spotify. She has worked with the Italian fashion label Prada, given interviews from Coachella and flaunted a tattoo designed by an artist who inked Miley Cyrus.
  • ...15 more annotations...
  • Until last year, when her creators orchestrated a publicity stunt to reveal her provenance, many of her fans assumed she was a flesh-and-blood 19-year-old. But Lil Miquela is made of pixels, and she was designed to attract follows and likes.
  • Why hire a celebrity, a supermodel or even a social media influencer to market your product when you can create the ideal brand ambassador from scratch?
  • Xinhua, the Chinese government’s media outlet, introduced a virtual news anchor last year, saying it “can work 24 hours a day.”
  • Soul Machines, a company founded by the Oscar-winning digital animator Mark Sagar, produced computer-generated teachers that respond to human students.
  • “Social media, to date, has largely been the domain of real humans being fake,” Mr. Ohanian added. “But avatars are a future of storytelling.”
  • Edward Saatchi, who started Fable, predicted that virtual beings would someday supplant digital home assistants and computer operating systems from companies like Amazon and Google.
  • YouPorn got in on the trend with Jedy Vales, an avatar who promotes the site and interacts with its users.
  • when a brand ambassador’s very existence is questionable — especially in an environment studded with deceptive deepfakes, bots and fraud — what happens to the old virtue of truth in advertising?
  • the concerns faced by human influencers — maintaining a camera-ready appearance and dealing with online trolls while keeping sponsors happy — do not apply to beings who never have an off day.
  • “That’s why brands like working with avatars — they don’t have to do 100 takes,”
  • Many of the characters advance stereotypes and impossible body-image standards. Shudu, a “digital fabrication” that Mr. Wilson modeled on the Princess of South Africa Barbie, was called “a white man’s digital projection of real-life black womanhood.”
  • “It’s an interesting and dangerous time, seeing the potency of A.I. and its ability to fake anything,
  • Last summer, Lil Miquela’s Instagram account appeared to be hacked by a woman named Bermuda, a Trump supporter who accused Lil Miquela of “running from the truth.” A wild narrative emerged on social media: Lil Miquela was a robot built to serve a “literal genius” named Daniel Cain before Brud reprogrammed her. “My identity was a choice Brud made in order to sell me to brands, to appear ‘woke,’” she wrote in one post. The character vowed never to forgive Brud. A few months later, she forgave.
  • While virtual influencers are becoming more common, fans have engaged less with them than with the average fashion tastemaker online
  • “An avatar is basically a mannequin in a shop window,” said Nick Cooke, a co-founder of the Goat Agency, a marketing firm. “A genuine influencer can offer peer-to-peer recommendations.”
leilamulveny

Opinion | California's Ethnic Studies Follies - The New York Times - 0 views

  • The first time California’s Department of Education published a draft of an ethnic studies “model curriculum” for high school students, in 2019, it managed the neat trick of omitting anti-Semitism while committing it.
  • There was also an approving mention of a Palestinian singer rapping that Israelis “use the press so they can manufacture” — the old refrain that lying Jews control the media.
  • One can still quarrel with the curriculum’s tendentiously racialized view of the American-Jewish experience. But at least the anti-Semitic and anti-Zionist dog whistles have been taken out and the history of anti-Semitism has been put in.
  • ...6 more annotations...
  • Yet as the Board of Education is set to vote on the new curriculum this month, it is likelier than before to enthrone ethnic studies, an older relative to critical race theory, into the largest public school system in the United States. This is a big deal in America’s ongoing culture wars. And it’s a bad deal for California’s students, at least for those whose school districts decide to make the curriculum their own.
  • Ethnic studies is less an academic discipline than it is the recruiting arm of a radical ideological movement masquerading as mainstream pedagogy. From the opening pages of the model curriculum, students are expected not just to “challenge racist, bigoted, discriminatory, imperialist/colonial beliefs,” but to “critique empire-building in history” and “connect ourselves to past and contemporary social movements that struggle for social justice.”
  • The former is education. The latter is indoctrination. The ethnic studies curriculum conceals the difference.
  • When the main thing left-wing progressives see about America is its allegedly oppressive systems of ethnicity or color, they aren’t seeing America at all. Nor should they be surprised when right-wing reactionaries adopt a perverse version of their views. To treat “whiteness” — conditional or otherwise — not as an accident of pigmentation but as an ethnicity unto itself is what the David Dukes of the world have always wanted.
  • This is a curriculum that magnifies differences, encourages tribal loyalties and advances ideological groupthink.
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • ...15 more annotations...
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • The moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. A company I’ve invested in, Likewise, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • Entertainment and shopping
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences. (A minimal code sketch contrasting such a bot with an agent follows this list.)
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
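
A minimal sketch, in Python, of the bot-versus-agent contrast drawn in the annotations above. Every name in it (the Agent class, the observe and suggest methods, the memory format) is invented for illustration; it is not any vendor's actual API.

# A stateless bot: single-purpose, reactive, and forgetful. It answers
# only when triggered and retains nothing between messages.
def bot_reply(message: str) -> str:
    if "hours" in message.lower():
        return "We are open 9 to 5."
    return "Sorry, I don't understand."

# A toy agent: it accumulates observations across apps and uses that
# memory to act proactively, echoing the "send flowers" example above.
class Agent:
    def __init__(self) -> None:
        self.memory: list[dict] = []  # persistent activity log

    def observe(self, app: str, activity: str) -> None:
        self.memory.append({"app": app, "activity": activity})

    def suggest(self) -> str | None:
        # Proactive step: infer intent from patterns in past activity.
        if any("surgery" in m["activity"] for m in self.memory):
            return "Your friend had surgery recently. Order flowers?"
        return None  # nothing worth proposing yet

agent = Agent()
agent.observe("email", "Note from Sam: recovering from surgery")
agent.observe("calendar", "Lunch with Sam moved to Thursday")
print(agent.suggest())  # -> Your friend had surgery recently. Order flowers?

The structural point is the whole point: the bot's reply is a fixed function of one message, while the agent's behavior is a function of accumulated state, which is what lets it personalize and anticipate.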
Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post - 0 views

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • ...20 more annotations...
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tool campaigns that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

A Future Without Jobs? Two Views of the Changing Work Force - The New York Times - 0 views

  • Eduardo Porter: I read your very interesting column about the universal basic income, the quasi-magical tool to ensure some basic standard of living for everybody when there are no more jobs for people to do. What strikes me about this notion is that it relies on a view of the future that seems to have jelled into a certainty, at least among the technorati on the West Coast
  • the economic numbers that we see today don’t support this view. If robots were eating our lunch, it would show up as fast productivity growth. But as Robert Gordon points out in his new book, “The Rise and Fall of American Growth,” productivity has slowed sharply. He argues pretty convincingly that future productivity growth will remain fairly modest, much slower than during the burst of American prosperity in mid-20th century.
  • it relies on an unlikely future. It’s not a future with a lot of crummy work for low pay, but essentially a future with little or no paid work at all.
  • ...17 more annotations...
  • The former seems to me a not unreasonable forecast — we’ve been losing good jobs for decades, while low-wage employment in the service sector has grown. But no paid work? That’s more a dream (or a nightmare) than a forecast
  • Farhad Manjoo: Because I’m scared that they’ll unleash their bots on me, I should start by defending the techies a bit
  • They see a future in which a small group of highly skilled tech workers reign supreme, while the rest of the job world resembles the piecemeal, transitional work we see coming out of tech today (Uber drivers, Etsy shopkeepers, people who scrape by on other people’s platforms).
  • Why does that future call for instituting a basic income instead of the smaller and more feasible labor-policy ideas that you outline? I think they see two reasons. First, techies have a philosophical bent toward big ideas, and U.B.I. is very big.
  • They see software not just altering the labor market at the margins but fundamentally changing everything about human society. While there will be some work, for most nonprogrammers work will be insecure and unreliable. People could have long stretches of not working at all — and U.B.I. is alone among proposals that would allow you to get a subsidy even if you’re not working at all
  • If there are, in fact, jobs to be had, a universal basic income may not be the best choice of policy. The lack of good work is probably best addressed by making the work better — better paid and more skilled — and equipping workers to perform it,
  • The challenge of less work could just lead to fewer working hours. Others are already moving in this direction. People work much less in many other rich countries: Norwegians work 20 percent fewer hours per year than Americans; Germans 25 percent fewer.
  • Eduardo Porter: I guess some enormous discontinuity right around the corner might vastly expand our prosperity. Joel Mokyr, an economic historian that knows much more than I do about the evolution of technology, argues that the tools and techniques we have developed in recent times — from gene sequencing to electron microscopes to computers that can analyze data at enormous speeds — are about to open up vast new frontiers of possibility. We will be able to invent materials to precisely fit the specifications of our homes and cars and tools, rather than make our homes, cars and tools with whatever materials are available.
  • Eduardo Porter: To my mind, a universal basic income functions properly only in a world with little or no paid work because the odds of anybody taking a job when his or her needs are already being met are going to be fairly low.
  • The discussion, I guess, really depends on how high this universal basic income would be. How many of our needs would it satisfy?
  • You give the techies credit for seriously proposing this as an optimal solution to wrenching technological and economic change. But in a way, isn’t it a cop-out? They’re just passing the bag to the political system. Telling Congress, “You fix it.”
  • the idea of the American government agreeing to tax capitalists enough to hand out checks to support the entire working class is in an entirely new category of fantasy.
  • paradoxically, they also see U.B.I. as more politically feasible than some of the other policy proposals you call for. One of the reasons some libertarians and conservatives like U.B.I. is that it is a very simple, efficient and universal form of welfare — everyone gets a monthly check, even the rich, and the government isn’t going to tell you what to spend it on. Its very universality breaks through political opposition.
  • Farhad Manjoo: One key factor in the push for U.B.I., I think, is the idea that it could help reorder social expectations. At the moment we are all defined by work; Western society generally, but especially American society, keeps social score according to what people do and how much they make for it. The dreamiest proponents of U.B.I. see that changing as work goes away. It will be O.K., under this policy, to choose a life of learning instead of a low-paying bad job
  • The question is whether this could produce another burst of productivity like the one we experienced between 1920 and 1970, which — by the way — was much greater than the mini-productivity boom produced by information technology in the 1990s.
  • investors don’t seem to think so. Long-term interest rates have been gradually declining for a fairly long time. This would suggest that investors do not expect a very high rate of return on their future investments. R.&D. intensity is slowing down, and the rate at which new businesses are formed is also slowing.
  • Little in these dynamics suggests a high-tech utopia — or dystopia, for that matter — in the offing
Javier E

A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck - The New York Times - 0 views

  • In Robot America, most manual laborers will have been replaced by herculean bots. Truck drivers, cabbies, delivery workers and airline pilots will have been superseded by vehicles that do it all. Doctors, lawyers, business executives and even technology columnists for The New York Times will have seen their ranks thinned by charming, attractive, all-knowing algorithms.
  • U.B.I., and it goes like this: As the jobs dry up because of the spread of artificial intelligence, why not just give everyone a paycheck?
  • While U.B.I. has been associated with left-leaning academics, feminists and other progressive activists, it has lately been adopted by a wider range of thinkers, including some libertarians and conservatives. It has also gained support among a cadre of venture capitalists in New York and Silicon Valley, the people most familiar with the potential for technology to alter modern work.
  • ...15 more annotations...
  • tech supporters of U.B.I. consider machine intelligence to be something like a natural bounty for society: The country has struck oil, and now it can hand out checks to each of its citizens.
  • These supporters argue machine intelligence will produce so much economic surplus that we could collectively afford to liberate much of humanity from both labor and suffering.
  • As computers perform more of our work, we’d all be free to become artists, scholars, entrepreneurs or otherwise engage our passions in a society no longer centered on the drudgery of daily labor.
  • “For a couple hundred years, we’ve constructed our entire world around the need to work. Now we’re talking about more than just a tweak to the economy — it’s as foundational a departure as when we went from an agrarian society to an industrial one.”
  • “I think it’s a bad use of a human to spend 20 years of their life driving a truck back and forth across the United States,” Mr. Wenger said. “That’s not what we aspire to do as humans — it’s a bad use of a human brain — and automation and basic income is a development that will free us to do lots of incredible things that are more aligned with what it means to be human.”
  • There is an urgency to the techies’ interest in U.B.I. They argue that machine intelligence reached an inflection point in the last couple of years, and that technological progress now looks destined to change how most of the world works.
  • Wage growth is sluggish, job security is nonexistent, inequality looks inexorable, and the ideas that once seemed like a sure path to a better future (like taking on debt for college) are in doubt. Even where technology has created more jobs, like the so-called gig economy work created by services like Uber, it has only added to our collective uncertainty about the future of work.
  • people are looking at these trends and realizing these questions about the future of work are more real and immediate than they guessed,”
  • A cynic might see the interest of venture capitalists in U.B.I. as a way for them to atone for their complicity in the tech that might lead to permanent changes in the global economy.
  • they don’t see U.B.I. merely as a defense of the current social order. Instead they see automation and U.B.I. as the most optimistic path toward wider social progress.
  • When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?
  • Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.
  • They also note that increasing economic urgency will push widespread political acceptance of the idea. “There’s a sense that growing inequality is intractable, and that we need to do something about it,
  • Andrew L. Stern, a former president of the Service Employees International Union, who is working on a book about U.B.I., compared the feeling of the current anxiety around jobs to a time of war. “I grew up during the Vietnam War, and my parents were antiwar for one reason: I could be drafted,” he said.
  • Today, as people across all income levels become increasingly worried about how they and their children will survive in tech-infatuated America, “we are back to the Vietnam War when it comes to jobs,
Javier E

Will You Lose Your Job to a Robot? Silicon Valley Is Split - NYTimes.com - 0 views

  • The question for Silicon Valley is whether we’re heading toward a robot-led coup or a leisure-filled utopia.
  • Interviews with 2,551 people who make, research and analyze new technology. Most agreed that robotics and artificial intelligence would transform daily life by 2025, but respondents were almost evenly split about what that might mean for the economy and employment.
  • techno-optimists. They believe that even though machines will displace many jobs in a decade, technology and human ingenuity will produce many more, as happened after the agricultural and industrial revolutions. The meaning of “job” might change, too, if people find themselves with hours of free time because the mundane tasks that fill our days are automated.
  • ...8 more annotations...
  • The other half agree that some jobs will disappear, but they are not convinced that new ones will take their place, even for some highly skilled workers. They fear a future of widespread unemployment, deep inequality and violent uprisings — particularly if policy makers and educational institutions don’t step in.
  • “We’re going to have to come to grips with a long-term employment crisis and the fact that — strictly from an economic point of view, not a moral point of view — there are more and more ‘surplus humans.’”
  • “The degree of integration of A.I. into daily life will depend very much, as it does now, on wealth. The people whose personal digital devices are day-trading for them, and doing the grocery shopping and sending greeting cards on their behalf, are people who are living a different life than those who are worried about missing a day at one of their three jobs due to being sick, and losing the job and being unable to feed their children.”
  • “Only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”
  • “We hardly dwell on the fact that someone trying to pick a career path that is not likely to be automated will have a very hard time making that choice. X-ray technician? Outsourced already, and automation in progress. The race between automation and human work is won by automation.”
  • “Robotic sex partners will be commonplace. … The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ’bot-based economy?”
  • “Employment will be mostly very skilled labor — and even those jobs will be continuously whittled away by increasingly sophisticated machines. Live, human salespeople, nurses, doctors, actors will be symbols of luxury, the silk of human interaction as opposed to the polyester of simulated human contact.”
  • “The biggest exception will be jobs that depend upon empathy as a core capacity — schoolteacher, personal service worker, nurse. These jobs are often those traditionally performed by women. One of the bigger social questions of the mid-late 2020s will be the role of men in this world.”
Javier E

Opinion | The Deadly Soul of a New Machine - The New York Times - 0 views

  • it’s not too much of a reach to see Flight 610 as representative of the hinge in history we’ve arrived at — with the bots, the artificial intelligence and the social media algorithms now shaping the fate of humanity at a startling pace.
  • Like the correction system in the 737, these inventions are designed to make life easier and safer — or at least more profitable for the owners.
  • The overall idea is to outsource certain human functions, the drudgery and things prone to faulty judgment, while retaining master control. The question is: At what point is control lost and the creations take over? How about now?
  • ...3 more annotations...
  • The C.E.O. of Microsoft, Satya Nadella, hit a similar cautionary note at the company’s recent annual shareholder meeting. Big Tech, he said, should be asking “not what computers can do, but what they should do.”
  • It’s the “can do” part that should scare you. Facebook, once all puppies, baby pictures and high school reunion updates, is a monster of misinformation.
  • As haunting as those final moments inside the cockpit of Flight 610 were, it’s equally haunting to grasp the full meaning of what happened: The system overrode the humans and killed everyone. Our invention. Our folly.
unawinn

U.K. Parliament Asks: Did Russia Try to Sway Brexit Vote? - The New York Times - 0 views

  • The British inquiry adds to the mounting pressure for more disclosure from the internet giants, which have already acknowledged broad Russian efforts to influence national elections in both the United States and France
  • A recent academic study found 13,493 suspected “bot” accounts that appeared to send out automated messages related to the referendum in the run-up to the vote but were removed in its immediate aftermath
  • Russian agents covertly bought advertising on its platform in an effort to swing the 2016 presidential election in favor of Mr. Trump
Javier E

Spain's far-right Vox party shot from social media into parliament overnight. How? - Wa... - 0 views

  • Whereas successful political movements used to have a single ideology, they can now combine several. Think about how record companies put together new pop bands: They do market research, they pick the kinds of faces that match, and then they market the band by advertising it to the most favorable demographic. New political parties can now operate like that: You can bundle together issues, repackage them and then market them, using exactly the same kind of targeted messaging — based on exactly the same kind of market research — that you know has worked in other places.
  • Opposition to Catalan and Basque separatism; opposition to feminism and same-sex marriage; opposition to immigration, especially by Muslims; anger at corruption; boredom with mainstream politics; a handful of issues, such as hunting and gun ownership, that some people care a lot about and others don’t know exist; plus a streak of libertarianism, a talent for mockery and a whiff of nostalgia
  • All of these are the ingredients that have gone into the creation of Vox.
  • ...37 more annotations...
  • The important relationships between Vox and the European far right, as well as the American alt-right, are happening elsewhere.
  • there have been multiple contacts between Vox and the other far-right parties of Europe. In 2017, Abascal met Marine Le Pen, the French far-right leader, as Vox’s Twitter account recorded; on the eve of the election, he tweeted his thanks to Matteo Salvini, the Italian far-right leader, for his support. Abascal and Espinosa both went to Warsaw recently to meet the leaders of the nativist, anti-pluralist Polish ruling party, and Espinosa showed up at the Conservative Political Action Conference in the D.C. area, as well.
  • these are issues that belong to the realm of identity politics, not economics. Espinosa characterized all of them as arguments with “the left.”
  • the nationalist parties, rooted in their own particular histories, are often in conflict with one another almost by definition.
  • The European far right has now found a set of issues it can unite around. Opposition to immigration, especially Muslim immigration, is one of them; promotion of a socially conservative worldview is another.
  • dislike of same-sex civil unions or African taxi drivers is something that even Austrians and Italians who disagree about the location of their border can share.
  • Alto Data Analytics. Alto, which specializes in applying artificial intelligence to the analysis of public data, such as that found on Twitter, Facebook, Instagram, YouTube and other public sources, recently produced some elegant, colored network maps of the Spanish online conversation, with the goal of identifying disinformation campaigns seeking to distort digital conversations
  • three outlying, polarized conversations — “echo chambers,” whose members are mostly talking and listening only to one another: the Catalan secessionist conversation, the far-left conversation and the Vox conversation. 
  • the largest number of “abnormal, high-activity users” — bots, or else real people who post constantly and probably professionally — were also found within these three communities, especially the Vox community, which accounted for more than half of them
  • uncovered a network of nearly 3,000 “abnormal, high-activity users” that had pumped out nearly 4½ million pro-Vox and anti-Islamic messages on Twitter in the past year (a toy version of such a rate-based activity filter is sketched after this list)
  • For the past couple of years, it has focused on immigration scare stories, gradually increasing their emotional intensity
  • all of it aligns with messages being put out by Vox.
  • a week before Spain’s polling day, the network was tweeting images of what its members described as a riot in a “Muslim neighborhood in France.” In fact, the clip showed a scene from recent anti-government riots in Algeria.
  • Vox supporters, especially the “abnormal, high-activity users,” are very likely to post and tweet content and material from a very particular groups of sources: a set of conspiratorial websites, mostly set up at least a year ago, sometimes run by a single person, which post large quantities of highly partisan articles and headlines.
  • The Alto team had found exactly the same kinds of websites in Italy and Brazil, in the months before those countries’ elections in 2018
  • the websites began putting out partisan material — in Italy, about immigration; in Brazil, about corruption and feminism — during the year before the vote.
  • they served to feed and amplify partisan themes even before they were really part of mainstream politics.
  • In Spain, there are a half-dozen such sites, some quite professional and some clearly amateur
  • One of the more obscure sites has exactly the same style and layout as a pro-Bolsonaro Brazilian site, almost as though both had been designed by the same person
  • The owner of digitalSevilla — according to El Pais, a 24-year-old with no journalism experience — is producing headlines that compare the Andalusian socialist party leader to “the evil lady in Game of Thrones” and, at times, has had more readership than established newspapers
  • They function not unlike Infowars, Breitbart, the infamous partisan sites that operated from Macedonia during the U.S. presidential campaign
  • all of which produced hypercharged, conspiratorial, partisan news and outraged headlines that could then be pumped into hypercharged, conspiratorial echo chambers.
  • The Global Compact for Safe, Orderly and Regular Migration. Though the pact received relatively little mainstream media attention, in the lead-up to that gathering, and in its wake, Alto found nearly 50,000 Twitter users tweeting conspiracy theories about the pact
  • Much like the Spanish network that promotes Vox, these users were promoting material from extremist and conspiratorial websites, using identical images, linking and retweeting one another across borders.
  • A similar international network went into high gear after the fire at Notre Dame Cathedral in Paris. The Institute for Strategic Dialogue tracked thousands of posts from people claiming to have seen Muslims “celebrating” the fire, as well as from people posting rumors and pictures that purported to prove there had been arson
  • These same kinds of memes and images then rippled through Vox’s WhatsApp and Telegram fan groups. These included, for example, an English-language meme showing Paris “before Macron,” with Notre Dame burning, and “after Macron” with a mosque in its place, as well as a news video, which, in fact, had been made about another incident, talking about arrests and gas bombs found in a nearby car. It was a perfect example of the alt-right, the far right and Vox all messaging the same thing, at the same time, in multiple languages, attempting to create the same emotions across Europe, North America and beyond.
  • CitizenGo is part of a larger network of European organizations dedicated to what they call “restoring the natural order”: rolling back gay rights, restricting abortion and contraception, promoting an explicitly Christian agenda. They put together mailing lists and keep in touch with their supporters; the organization claims to reach 9 million people.
  • OpenDemocracy has additionally identified a dozen other U.S.-based organizations that now fund or assist conservative activists in Europe
  • she now runs into CitizenGo, and its language, around the world. Among other things, it has popularized the expression “gender ideology” — a term the Christian right invented, and that has come to describe a whole group of issues, from domestic violence laws to gay rights — in Africa and Latin America, as well as Europe.
  • In Spain, CitizenGo has made itself famous by painting buses with provocative slogans — one carried the hashtag #feminazis and an image of Adolf Hitler wearing lipstick — and driving them around Spanish cities.
  • It’s a pattern we know from U.S. politics. Just as it is possible in the United States to support super PACs that then pay for advertising around issues linked to particular candidates, so is it now possible for Americans, Russians or the Princess von Thurn und Taxis to donate to CitizenGo — and, thus, to support Vox.
  • as most Europeans probably don’t realize — outsiders who want to fund the European far right have been able to do so for some time. OpenDemocracy’s most recent report quotes Arsuaga, the head of CitizenGo, advising a reporter that money given to his group could “indirectly” support Vox, since “we actually currently totally align.”
  • “Make Spain Great Again,” he explained, “was a kind of provocation. . . . It was just intended to make the left a little bit more angry.”
  • The number of actual Spanish Muslims is relatively low — most immigration to Spain is from Latin America — and the number of actual U.S. Muslims is, relatively, even lower. But the idea that Christian civilization needs to redefine itself against the Islamic enemy has, of course, a special historic echo in Spain
  • “We are entering into a period of time when politics is becoming something different, politics is warfare by another means — we don’t want to be killed, we have to survive. . . . I think politics now is winner-takes-all. This is not just a phenomenon in Spain.”
  • As Aznar, the former prime minister, said, the party is a “consequence,” though it is not only a consequence of Catalan separatism. It’s also a consequence of Trumpism, of the conspiracy websites, of the international alt-right/far-right online campaign, and especially of a social conservative backlash that has been building across the continent for years.
  • The nationalists, the anti-globalists, the people who are skeptical of international laws and international organizations — they, too, now work together, across borders, for common causes. They share the same contacts. They tap money from the same funders. They are learning from one another’s mistakes, copying one another’s language. And, together, they think they will eventually win.
Javier E

'Fiction is outperforming reality': how YouTube's algorithm distorts truth | Technology... - 0 views

  • There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.
  • Company insiders tell me the algorithm is the single most important engine of YouTube’s growth
  • YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”
  • ...49 more annotations...
  • Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content
  • One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
  • academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”
  • Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm
  • Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.
  • The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.
  • “YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
  • Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.
  • The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”
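A rough illustration of this kind of signal-weighting, sketched in Python. The signal names, weights, and scoring function below are invented for illustration only; the real system is proprietary, learned rather than hand-tuned, and, as Chaslot says, constantly re-weighted.

    # Toy illustration of ranking candidate videos by weighted engagement
    # signals, with watch time dominating. Signal names and weights are
    # invented; the real system is proprietary and constantly re-tuned.

    def score(video: dict, weights: dict) -> float:
        """Linear combination of engagement signals for one candidate."""
        return sum(w * video.get(signal, 0.0) for signal, w in weights.items())

    WEIGHTS = {
        "expected_watch_minutes": 1.0,  # the dominant signal: watch time
        "click_probability": 0.2,       # everything else weighted far lower
        "freshness": 0.05,
    }

    candidates = [
        {"id": "a", "expected_watch_minutes": 8.0, "click_probability": 0.3},
        {"id": "b", "expected_watch_minutes": 2.0, "click_probability": 0.9},
    ]

    # Rank candidates and surface the top scorers as "up next" suggestions.
    up_next = sorted(candidates, key=lambda v: score(v, WEIGHTS), reverse=True)
    print([v["id"] for v in up_next])  # ["a", "b"]: watch time wins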
  • Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.
  • He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world.
  • Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”
  • YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watchtime”.
  • It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes
  • Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm
  • It finds videos through a word search, selecting a “seed” video to begin with, and recording several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos being detected are YouTube’s generic recommendations, rather than videos personalised to a user. And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences.
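A minimal sketch of that crawling procedure in Python. The get_up_next helper is a hypothetical placeholder for whatever fetches the “up next” list from a clean, history-free session; every name here is illustrative, not Chaslot’s actual code.

    # Sketch of a recommendation-chain crawler as described above.
    # get_up_next is a hypothetical placeholder, not a real API: it should
    # return the "up next" video IDs shown for a given video when viewed
    # with no account and no watch history.

    from collections import Counter

    def get_up_next(video_id):
        """Placeholder: fetch the 'up next' recommendation IDs for a video."""
        raise NotImplementedError("scrape the watch page from a clean session")

    def crawl_recommendations(seed_ids, depth=3, per_video=5):
        """Follow layers of recommendations outward from seed videos,
        counting how often each video is recommended across all layers."""
        counts = Counter()
        frontier = list(seed_ids)
        for _ in range(depth):              # one layer of "up next" per pass
            next_frontier = []
            for vid in frontier:
                recs = get_up_next(vid)[:per_video]
                counts.update(recs)         # accumulate recommendation counts
                next_frontier.extend(recs)
            frontier = next_frontier
        return counts

    # Repeating this thousands of times from seed videos found via a word
    # search, then aggregating the counts, builds up a picture of what the
    # algorithm prefers to recommend, independent of any personalisation.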
  • Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.
  • When his program found a seed video by searching the query “who is Michelle Obama?” and then followed the chain of “up next” suggestions, for example, most of the recommended videos said she “is a man”
  • He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton.
  • “It was strange,” he explains to me. “Wherever you started, whether it was from a Trump search or a Clinton search, the recommendation algorithm was much more likely to push you in a pro-Trump direction.”
  • Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.
  • “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind,”
  • “Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”
  • “This research captured the apparent direction of YouTube’s political ecosystem,” he says. “That has not been done before.”
  • I spent weeks watching, sorting and categorising the trove of videos with Erin McCormick, an investigative reporter and expert in database analysis. From the start, we were stunned by how many extreme and conspiratorial videos had been recommended, and the fact that almost all of them appeared to be directed against Clinton.
  • But what was most compelling was how often Chaslot’s software detected anti-Clinton conspiracy videos appearing “up next” beside other videos.
  • There were too many videos in the database for us to watch them all, so we focused on 1,000 of the top-recommended videos. We sifted through them one by one to determine whether the content was likely to have benefited Trump or Clinton. Just over a third of the videos were either unrelated to the election or contained content that was broadly neutral or even-handed. Of the remaining 643 videos, 551 were videos favouring Trump, while only 92 favoured the Clinton campaign.
  • The sample we had looked at suggested Chaslot’s conclusion was correct: YouTube was six times more likely to recommend videos that aided Trump than his adversary.
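The “six times” figure follows directly from the counts quoted above: of the 643 partisan videos, 551 favoured Trump and 92 favoured Clinton, and 551 / 92 ≈ 5.99, i.e. almost exactly six to one.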
  • The spokesperson added: “Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest.”
  • YouTube seemed to be saying that its algorithm was a neutral mirror of the desires of the people who use it – if we don’t like what it does, we have ourselves to blame. How does YouTube interpret “viewer interest” – and aren’t “the videos people choose to watch” influenced by what the company shows them?
  • Offered the choice, we may instinctively click on a video of a dead man in a Japanese forest, or a fake news clip claiming Bill Clinton raped a 13-year-old. But are those in-the-moment impulses really a reflection of the content we want to be fed?
  • YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging. “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods,” she says. “So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.”
  • Once that gets normalised, however, what is fractionally more edgy or bizarre becomes, Tufekci says, novel and interesting. “So the food gets higher and higher in sugar, fat and salt – natural human cravings – while the videos recommended and auto-played by YouTube get more and more bizarre or hateful.”
  • “This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.
  • “We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”
  • Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”. The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics
  • Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube. About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube – many of them taken down by their creators. Chaslot has always thought this suspicious. These were videos with titles such as “Must Watch!! Hillary Clinton tried to ban this video”, watched millions of times before they disappeared. “Why would someone take down a video that has been viewed millions of times?” he asks
  • In every case, the largest source of traffic – the invisible force – came from the clips appearing in the “up next” column. William Ramsey, an occult investigator from southern California who made “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!”, shared screen grabs that showed the recommendation algorithm pushed his video even after YouTube had emailed him to say it violated its guidelines. Ramsey’s data showed the video was watched 2.4m times by US-based users before election day. “For a nobody like me, that’s a lot,” he says. “Enough to sway the election, right?”
  • “I don’t have smoking-gun proof of who logged in to control those accounts,” he says. “But judging from the history of what we’ve seen those accounts doing before, and the characteristics of how they tweet and interconnect, they are assembled and controlled by someone – someone whose job was to elect Trump.”
  • After the Senate’s correspondence with Google over possible Russian interference with YouTube’s recommendation algorithm was made public last week, YouTube sent me a new statement. It emphasised changes it made in 2017 to discourage the recommendation system from promoting some types of problematic content. “We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” it added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”
  • In the months leading up to the election, the Next News Network turned into a factory of anti-Clinton news and opinion, producing dozens of videos a day and reaching an audience comparable to that of MSNBC’s YouTube channel. Chaslot’s research indicated Franchi’s success could largely be credited to YouTube’s algorithms, which consistently amplified his videos to be played “up next”. YouTube had sharply dismissed Chaslot’s research.
  • I contacted Franchi to see who was right. He sent me screen grabs of the private data given to people who upload YouTube videos, including a breakdown of how their audiences found their clips. The largest source of traffic to the Bill Clinton rape video, which was viewed 2.4m times in the month leading up to the election, was YouTube recommendations.
  • The same was true of all but one of the videos Franchi sent me data for. A typical example was a Next News Network video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP” in which Franchi, pointing to a tiny movement of Clinton’s lips during a TV debate, claims she says “fuck you” to her presidential rival. The data Franchi shared revealed that in the month leading up to the election, 73% of the traffic to the video – amounting to 1.2m of its views – was due to YouTube recommendations. External traffic accounted for only 3% of the views.
  • many of the other creators of anti-Clinton videos I spoke to were amateur sleuths or part-time conspiracy theorists. Typically, they might receive a few hundred views on their videos, so they were shocked when their anti-Clinton videos started to receive millions of views, as if they were being pushed by an invisible force.
  • I shared the entire database of 8,000 YouTube-recommended videos with John Kelly, the chief executive of the commercial analytics firm Graphika, which has been tracking political disinformation campaigns. He ran the list against his own database of Twitter accounts active during the election, and concluded many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.
  • Daniel Alexander Cannon, a conspiracy theorist from South Carolina, tells me: “Every video I put out about the Clintons, YouTube would push it through the roof.” His best-performing clip was a video titled “Hillary and Bill Clinton ‘The 10 Photos You Must See’”, essentially a slideshow of appalling (and seemingly doctored) images of the Clintons with a voiceover in which Cannon speculates on their health. It has been seen 3.7m times on YouTube, and 2.9m of those views, Cannon said, came from “up next” recommendations.
  • his research also does something more important: revealing how thoroughly our lives are now mediated by artificial intelligence.
  • Less than a generation ago, the way voters viewed their politicians was largely shaped by tens of thousands of newspaper editors, journalists and TV executives. Today, the invisible codes behind the big technology platforms have become the new kingmakers.
  • They pluck from obscurity people like Dave Todeschini, a retired IBM engineer who “let off steam” during the election by recording himself opining on Clinton’s supposed involvement in paedophilia, child sacrifice and cannibalism. “It was crazy, it was nuts,” he said of the avalanche of traffic to his YouTube channel, which by election day had more than 2m views