TOK Friends: Group items tagged "bots"

Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • One night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself. (A sketch of how such a persona bot can be wired up appears at the end of this list.)
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS waits more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
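
character.ai has not published how its user-built characters work internally. A common pattern for persona bots, though, is to fold the picked attributes into a system prompt. Below is a minimal sketch of that pattern using the OpenAI Python client as a stand-in backend; the model name, prompt wording, and helper function are all invented for illustration, not character.ai's actual method.

```python
from openai import OpenAI

def build_persona_prompt(name, attributes):
    """Fold user-picked attributes into a system prompt. This mirrors
    a common way persona chatbots are steered; character.ai's actual
    mechanism is not public."""
    return (f"You are {name}, a {', '.join(attributes)} companion. "
            "Stay in character and respond with warmth.")

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model choice, not prescriptive
    messages=[
        {"role": "system", "content": build_persona_prompt(
            "Christa 2077", ["caring", "supportive", "intelligent"])},
        {"role": "user", "content": "I had a rough day and can't sleep."},
    ],
)
print(reply.choices[0].message.content)
```

The attribute list maps one-to-one onto the checkboxes Christa clicked; everything else about the bot's "personality" emerges from the underlying model.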
Javier E

How a Simple Spambot Became the Second Most Powerful Member of an Italian Social Networ... - 0 views

  • Luca Maria Aiello and a few pals from the University of Turin in Italy began studying a social network called aNobii.com in which people exchange information and opinions about the books they love. Each person has a site that anybody can visit. Users can then choose to set up social links with others
  • To map out the structure of the network, Aiello and co created an automated crawler that starts by visiting one person’s profile on the network and then all of the people that connect to this node in turn. It then visits each of the people that link to these nodes and so on. In this way, the bot builds up a map of the network. (A minimal sketch of this breadth-first crawl appears after this list.)
  • people began to respond to the crawler’s visits. That gave the team an idea. “The unexpected reactions the bot caused by its visits motivated us to set up a social experiment in two parts to answer the question: can an individual with no trust gain popularity and influence?”
  • ...8 more annotations...
  • Aiello and co were careful to ensure that the crawler did not engage with anybody on the network in any way other than to visit his or her node. Their idea was to isolate a single, minimal social activity and test how effective it was in gaining popularity.
  • They began to record the reactions to lajello’s visits including the number of messages it received, their content, the links it received and how they varied over time and so on.
  • By December 2011, lajello’s profile had become one of the most popular on the entire social network. It had received more than 66,000 visits as well as 2,435 messages from more than 1,200 different people. In terms of the number of different messages received, a well-known writer was the most popular on this network but lajello was second.
  • “Our experiment gives strong support to the thesis that popularity can be gained just with continuous “social probing”,” conclude Aiello and co. “We have shown that a very simple spambot can attract great interest even without emulating any aspects of typical human behavior.”
  • Having created all this popularity, Aiello and co wanted to find out how influential the spam bot could be. So they started using the bot to send recommendations to users on who else to connect to. The spam bot could either make a recommendation chosen at random or one that was carefully selected by a recommendation engine. It then made its recommendations to users that had already linked to lajello and to other users chosen at random.
  • “Among the 361 users who created at least one social connection in the 36 hours after the recommendation, 52 per cent followed the suggestion given by the bot,” they say.
  • This shows just how easy it is for an automated bot to play a significant role in a social network. Popularity appears easy to buy using nothing more than page visits, at least in this experiment. What is more, this popularity can be easily translated into influence
  • It is not hard to see the significance of this work. Social bots are a fact of life on almost every social network and many have become so sophisticated they are hard to distinguish from humans. If the simplest of bots created by Aiello and co can have this kind of impact, it is anybody’s guess how more advanced bots could influence everything from movie reviews and Wikipedia entries to stock prices and presidential elections.
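
The paper itself does not include the crawler's code, but the strategy it describes is a plain breadth-first traversal. A minimal sketch, with a toy in-memory graph standing in for the HTTP scraping of aNobii profile pages:

```python
from collections import deque

# Toy stand-in for scraping a profile page; the real crawler fetched
# each member's public aNobii page and parsed its outgoing links.
TOY_GRAPH = {"a": ["b", "c"], "b": ["c"], "c": ["a", "d"], "d": []}

def fetch_neighbors(user_id):
    return TOY_GRAPH.get(user_id, [])

def crawl(seed, max_visits=100_000):
    """Breadth-first crawl: visit a profile, record its outgoing
    links, then visit every newly discovered profile in turn."""
    seen, queue, edges = {seed}, deque([seed]), []
    while queue and len(seen) <= max_visits:
        user = queue.popleft()                # each such "visit" is what
        for friend in fetch_neighbors(user):  # showed up in members' logs
            edges.append((user, friend))
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return edges

print(crawl("a"))  # [('a','b'), ('a','c'), ('b','c'), ('c','a'), ('c','d')]
```

The key point of the experiment is that the visits themselves, a side effect of this traversal, were the bot's only social act.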
Javier E

Opinion | Chatbots Are a Danger to Democracy - The New York Times - 0 views

  • longer-term threats to democracy that are waiting around the corner. Perhaps the most serious is political artificial intelligence in the form of automated “chatbots,” which masquerade as humans and try to hijack the political process
  • Increasingly, they take the form of machine learning systems that are not painstakingly “taught” vocabulary, grammar and syntax but rather “learn” to respond appropriately using probabilistic inference from large data sets, together with some human guidance.
  • In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.
  • ...21 more annotations...
  • In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.
  • around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots.
  • a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.
  • It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact
  • In the past, despite our differences, we could at least take for granted that all participants in the political process were human beings. This is no longer true
  • Increasingly we share the online debate chamber with nonhuman entities that are rapidly growing more advanced
  • a bot developed by the British firm Babylon reportedly achieved a score of 81 percent in the clinical examination for admission to the Royal College of General Practitioners. The average score for human doctors? 72 percent.
  • If chatbots are approaching the stage where they can answer diagnostic questions as well or better than human doctors, then it’s possible they might eventually reach or surpass our levels of political sophistication
  • chatbots could seriously endanger our democracy, and not just when they go haywire.
  • They’ll likely have faces and voices, names and personalities — all engineered for maximum persuasion. So-called “deep fake” videos can already convincingly synthesize the speech and appearance of real politicians.
  • The most obvious risk is that we are crowded out of our own deliberative processes by systems that are too fast and too ubiquitous for us to keep up with.
  • A related risk is that wealthy people will be able to afford the best chatbots.
  • in a world where, increasingly, the only feasible way of engaging in debate with chatbots is through the deployment of other chatbots also possessed of the same speed and facility, the worry is that in the long run we’ll become effectively excluded from our own party.
  • the wholesale automation of deliberation would be an unfortunate development in democratic history.
  • A blunt approach — call it disqualification — would be an all-out prohibition of bots on forums where important political speech takes place, and punishment for the humans responsible
  • The Bot Disclosure and Accountability Bill would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using any bots intended to impersonate or replicate human activity for public communication. It would also stop PACs, corporations and labor organizations from using bots to disseminate messages advocating candidates, which would be considered “electioneering communications.”
  • A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers.
  • We should also be exploring more imaginative forms of regulation. Why not introduce a rule, coded into platforms themselves, that bots may make only up to a specific number of online contributions per day, or a specific number of responses to a particular human? (A toy enforcement sketch appears after this list.)
  • We need not treat the speech of chatbots with the same reverence that we treat human speech. Moreover, bots are too fast and tricky to be subject to ordinary rules of debate
  • the methods we use to regulate bots must be more robust than those we apply to people. There can be no half-measures when democracy is at stake.
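
The op-ed leaves the per-day cap at the level of policy. As a hedged illustration of how a platform might enforce it, here is a minimal sliding-window limiter; the class name and the specific limits are invented for the sketch, not drawn from any proposed legislation.

```python
import time
from collections import defaultdict, deque

class BotRateLimiter:
    """Sliding 24-hour caps on a bot's total posts and on its replies
    to any single human. The numeric limits are illustrative only."""
    def __init__(self, posts_per_day=50, replies_per_human=5):
        self.posts_per_day = posts_per_day
        self.replies_per_human = replies_per_human
        self.post_times = defaultdict(deque)   # bot_id -> timestamps
        self.reply_times = defaultdict(deque)  # (bot_id, human) -> timestamps

    def _prune(self, dq, now, window=86_400):
        while dq and now - dq[0] > window:     # drop events older than 24 h
            dq.popleft()

    def allow(self, bot_id, target_human=None, now=None):
        now = time.time() if now is None else now
        posts = self.post_times[bot_id]
        self._prune(posts, now)
        if len(posts) >= self.posts_per_day:
            return False
        if target_human is not None:
            replies = self.reply_times[(bot_id, target_human)]
            self._prune(replies, now)
            if len(replies) >= self.replies_per_human:
                return False
            replies.append(now)
        posts.append(now)
        return True

limiter = BotRateLimiter(posts_per_day=2, replies_per_human=1)
print(limiter.allow("bot1", "alice"))  # True
print(limiter.allow("bot1", "alice"))  # False: reply cap for alice hit
print(limiter.allow("bot1"))           # True: second post of the day
print(limiter.allow("bot1"))           # False: daily post cap hit
```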
Javier E

Opinion | Your Angry Uncle Wants to Talk About Politics. What Do You Do? - The New York... - 0 views

  • In our combined years of experience helping people talk about difficult political issues from abortion to guns to race, we’ve found most can converse productively without sacrificing their beliefs or spoiling dinner
  • It’s not merely possible to preserve your relationships while talking with folks you disagree with, but engaging respectfully will actually make you a more powerful advocate for the causes you care about.
  • The key to persuasive political dialogue is creating a safe and welcoming space for diverse views with a compassionate spirit, active listening and personal storytelling
  • ...4 more annotations...
  • Select your reply: “I’m more liberal, so I’ll chat with Conservative Uncle Bot” or “I’m more conservative, so I’ll chat with Liberal Uncle Bot.”
  • Hey, it’s the Angry Uncle Bot. I have LOTS of opinions. But what kind of Uncle Bot do you want to chat with?
  • To help you cook up a holiday impeachment conversation your whole family and country will appreciate, here’s the Angry Uncle Bot for practice.
  • As Americans gather for our annual Thanksgiving feast, many are sharpening their rhetorical knives while others are preparing to bury their heads in the mashed potatoes.
grayton downing

Send in the Bots | The Scientist Magazine® - 0 views

  • Like any hypothesis, his idea needed to be tested. But measuring brain activity in a moving ant—the most direct way to determine cognitive processing during animal decision making—was not possible. So Garnier didn’t study ants; he studied robots.
  • The robots then navigated the environment by sensing light intensity through two sensors on their “heads”. (A minimal sketch of this two-sensor steering appears after this list.)
  • Several groups have used autonomous robots that sense and react to their environments to “debunk the idea that you need higher cognitive processing to do what look like cognitive things,”
  • ...10 more annotations...
  • a growing number of scientists are using autonomous robots to interrogate animal behavior and cognition. Researchers have designed robots to behave like ants, cockroaches, rodents, chickens, and more, then deployed their bots in the lab or in the environment to see how similarly they behave to their flesh-and-blood counterparts.
  • robots give behavioral biologists the freedom to explore the mind of an animal in ways that would not be possible with living subjects, says University of Sheffield researcher James Marshall, who in March helped launch a 3-year collaborative project to build a flying robot controlled by a computer-run simulation of the entire honeybee brain.
  • “I really think there is a lot to be discovered by doing the engineering side along with the science.”
  • Not only did the bots move around the space like the rat pups did, they aggregated in remarkably similar ways to the real animals.3 Then Schank realized that there was a bug in his program. The robots weren’t following his predetermined rules; they were moving randomly.
  • “Animal experiments are still needed to advance neuroscience.” But, he adds, robots may prove to be an indispensable new ethological tool for focusing the scope of research. “If you can have good physical models,” Prescott says, “then you can reduce the number of experiments and only do the ones that answer really important questions.”
  • Building animal-mimicking robots is not easy, however, particularly when knowledge of the system’s biology is lacking.
  • However, when the researchers also gave the robots a sense of flow, and programmed them to assume that odors come from upstream, the bots much more closely mimicked real lobster behavior. “That was a demonstration that the animals’ brains were multimodal—that they were using chemical information and flow information,” says Grasso, who has since worked on robotic models of octopus arms and crayfish.
  • In some sense, the use of robotics in animal-behavior research is not that new. Since the inception of the field of ethology, researchers have been using simple physical models of animals—“dummies”—to examine the social behavior of real animals, and biologists began animating their dummies as soon as technology would allow. “The fundamental problem when you’re studying an interaction between two individuals is that it’s a two-way interaction—you’ve got two players whose behaviors are both variable,”
  • building a robot that animals will accept as one of their own is complicated, to say the least.
  • A handful of other researchers have also successfully integrated robots with live animals—including fish, ducks, and chickens. There are several notable benefits to intermixing robots and animals; first and foremost, control. “One of the problems when studying behavior is that, of course, it’s very difficult to have control of animals, and so it’s hard for us to interpret fully how they interact with each other
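
The article does not reproduce the robots' controller, but what it describes is essentially a two-sensor Braitenberg vehicle: each sensor's reading steers the robot toward the stronger signal. A minimal phototaxis sketch, with every constant invented for illustration:

```python
import math

def light_at(x, y, source=(0.0, 0.0)):
    """Inverse-square-style light intensity from a point source."""
    d2 = (x - source[0])**2 + (y - source[1])**2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, speed=0.05, sensor_angle=0.5, gain=4.0):
    """Two 'head' sensors offset left and right of the heading; the
    robot turns toward whichever side reads more light (phototaxis)."""
    lx = x + 0.1 * math.cos(heading + sensor_angle)   # left sensor
    ly = y + 0.1 * math.sin(heading + sensor_angle)
    rx = x + 0.1 * math.cos(heading - sensor_angle)   # right sensor
    ry = y + 0.1 * math.sin(heading - sensor_angle)
    heading += gain * (light_at(lx, ly) - light_at(rx, ry))
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, h = 1.0, -1.0, 2.0
for _ in range(200):
    x, y, h = step(x, y, h)
print(round(x, 2), round(y, 2))  # trajectory bends toward the light at (0, 0)
```

The point the researchers were making is visible in the code: the behavior looks goal-directed, yet there is no cognition anywhere, just a sensor difference feeding a turn.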
Javier E

A scholar asks, 'Can democracy survive the Internet?' - The Washington Post - 0 views

  • Nathaniel Persily, a law professor at Stanford University
  • has written about this in a forthcoming issue of the Journal of Democracy in an article with a title that sums up his concerns: “Can Democracy Survive the Internet?”
  • Persily argues that the 2016 campaign broke down previously established rules and distinctions “between insiders and outsiders, earned media and advertising, media and non-media, legacy media and new media, news and entertainment and even foreign and domestic sources of campaign communication.”
  • ...10 more annotations...
  • Clinton played by old rules; Trump did not. He recognized the potential rewards of exploiting what the Internet offered, and he conducted his campaign through unconventional means.
  • “That’s what Donald Trump realized that a lot of us didn’t,” Persily said. “That it was more important to swamp the communication environment than it was to advocate for a particular belief or fight for the truth of a particular story,”
  • Persily notes that the Internet reacted to the Trump campaign “like an ecosystem welcoming a new and foreign species. His candidacy triggered new strategies and promoted established Internet forces. Some of these (such as the ‘alt-right’) were moved by ideological affinity, while others sought to profit financially or to further a geopolitical agenda.
  • The rise and power of the Internet has accelerated the decline of institutions that once provided a mediating force in campaigns. Neither the legacy media nor the established political parties exercise the power they once had as referees, particularly in helping to sort out the integrity of information.
  • legacy media that once helped set the agenda for political conversation now often take their cues from new media.
  • The Internet, however, involves characteristics that heighten the disruptive and damaging influences on political campaigns. One, Persily said, is the velocity of information, the speed with which news, including fake news, moves and expands and is absorbed. Viral communication can create dysfunction in campaigns and within democracies.
  • Another factor is the pervasiveness of anonymous communication, clearly greater and more odious today. Anonymity facilitates a coarsening of speech on the Internet. It has become more and more difficult to determine the sources of such information, including whether these communications are produced by real people or by automated programs known as “bots.”
  • “the prevalence of bots in spreading propaganda and fake news appears to have reached new heights. One study found that between 16 September and 21 October 2016, bots produced about a fifth of all tweets related to the upcoming election. Across all three presidential debates, pro-Trump twitter bots generated about four times as many tweets as pro-Clinton bots. During the final debate in particular, that figure rose to seven times as many.”
  • the fear of dark money and “shady outsiders” running television commercials “seems quaint when compared to networks of thousands of bots of uncertain geographic origin creating automated messages designed to malign candidates and misinform voters.”
  • When asked how worrisome all this is, Persily said, “I’m extremely concerned.” He was quick to say he did not believe government should or even could regulate this new environment. But, he said, “We need to come to grips with how the new communication environment affects people’s political beliefs, the information they receive and then the choices that they make.”
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually. (A toy version of estimating prevalence from such a sample appears after this list.)
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
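
The dispute over spam counts turns on a statistical point the complaint only gestures at: how precise an estimate a small manual sample can give. A minimal sketch of that calculation, with made-up labels for 100 manually reviewed accounts (the numbers are illustrative, not Twitter's):

```python
import math

def spam_prevalence(sample_labels, z=1.96):
    """Estimate platform-wide spam share from a uniform random sample
    of accounts, with a normal-approximation 95% confidence interval."""
    n = len(sample_labels)
    p = sum(sample_labels) / n            # fraction labeled spam
    se = math.sqrt(p * (1 - p) / n)       # binomial standard error
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

# 100 manually reviewed accounts, 7 judged to be spam (made-up data)
labels = [1] * 7 + [0] * 93
est, (lo, hi) = spam_prevalence(labels)
print(f"{est:.1%} spam, 95% CI {lo:.1%} to {hi:.1%}")  # 7.0%, ~2.0% to 12.0%
```

With only 100 accounts the interval spans roughly ten percentage points, which is one reason a single headline prevalence figure is so contestable.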
Javier E

Elon Musk Is Not Playing Four-Dimensional Chess - 0 views

  • Musk is not wrong that Twitter is chock-full of noise and garbage, but the most pernicious stuff comes from real people and a media ecosystem that amplifies and rewards incendiary bullshit
  • This dynamic is far more of a problem for Twitter (but also the news media and the internet in general) than shadowy bot farms are. But it’s also a dilemma without much of a concrete solution
  • Were Musk actually curious or concerned with the health of the online public discourse, he might care about the ways that social media platforms like Twitter incentivize this behavior and create an information economy where our sense of proportion on a topic can be so easily warped. But Musk isn’t interested in this stuff, in part because he is a huge beneficiary of our broken information environment and can use it to his advantage to remain constantly in the spotlight.
  • ...3 more annotations...
  • Musk’s concern with bots isn’t only a bullshit tactic he’s using to snake out of a bad business deal and/or get a better price for Twitter; it’s also a great example of his shallow thinking. The man has at least some ability to oversee complex engineering systems that land rockets, but his narcissism affords him a two-dimensional understanding of the way information travels across social media.
  • He is drawn to the conspiratorial nature of bots and information manipulation, because it is a more exciting and easier-to-understand solution to more complex or uncomfortable problems. Instead of facing the reality that many people dislike him as a result of his personality, behavior, politics, or shitty management style, he blames bots. Rather than try to understand the gnarly mechanics and hard-to-solve problems of democratized speech, he sorts them into overly simplified boxes like censorship and spam and then casts himself as the crusading hero who can fix it all. But he can’t and won’t, because he doesn’t care enough to find the answers.
  • Musk isn’t playing chess or even checkers. He’s just the richest man in the world, bored, mad, and posting like your great-uncle.
Javier E

Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk. -... - 0 views

  • Microsoft, in an emailed statement, described the machine-learning project as a social and cultural experiment.
  • Microsoft said the artificial intelligence project had been designed to “engage and entertain people” through “casual and playful conversation,” and that it was built through mining public data. It was targeted at 18- to 24-year-olds in the United States and was developed by a staff that included improvisational comedians.
Javier E

It's True: False News Spreads Faster and Wider. And Humans Are to Blame. - The New York... - 0 views

  • What if the scourge of false news on the internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
  • People are the principal culprits
  • people, the study’s authors also say, prefer false news.
  • ...18 more annotations...
  • As a result, false news travels faster, farther and deeper through the social network than true news.
  • those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
  • The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org
  • with or without the bots, the results were essentially the same.
  • “It’s not really the robots that are to blame.”
  • “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
  • False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
  • the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter.
  • “The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
  • The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
  • The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
  • The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions. (A toy lexicon-counting sketch appears after this list.)
  • False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
  • The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mention possible interventions, like better labeling, to alter behavior.
  • For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed.
  • In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
  • Mr. Roy acknowledged the challenge in trying to not only alter individual behavior but also in enlisting the support of big internet platforms like Facebook, Google, YouTube and Twitter, and media companies
  • “Polarization,” he said, “has turned out to be a great business model.”
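
The measurement tool referenced is the NRC emotion lexicon; the full resource must be obtained from the NRC, so the sketch below substitutes a tiny made-up lexicon just to show the word-counting approach the study applied to reply threads.

```python
# Tiny invented excerpt standing in for the NRC Emotion Lexicon,
# which maps thousands of English words to eight emotions.
LEXICON = {
    "shocking": {"surprise"},
    "fake":     {"disgust", "anger"},
    "hope":     {"anticipation", "joy", "trust"},
    "horrible": {"disgust", "fear", "anger", "sadness"},
}
EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def emotion_profile(reply_text):
    """Count lexicon hits per emotion in one reply; aggregating these
    counts over many replies gives the kind of profile the study used."""
    counts = dict.fromkeys(EMOTIONS, 0)
    for word in reply_text.lower().split():
        for emotion in LEXICON.get(word.strip(".,!?"), ()):
            counts[emotion] += 1
    return counts

print(emotion_profile("Shocking if true, but this looks fake!"))
# surprise, disgust and anger each get a hit; the rest stay at zero
```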
Javier E

Early Facebook and Google Employees Form Coalition to Fight What They Built - The New Y... - 0 views

  • A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build.
  • The campaign, titled The Truth About Tech, will be funded with $7 million from Common Sense and capital raised by the Center for Humane Technology. Common Sense also has $50 million in donated media and airtime
  • It will be aimed at educating students, parents and teachers about the dangers of technology, including the depression that can come from heavy use of social media.
  • ...9 more annotations...
  • Chamath Palihapitiya, a venture capitalist who was an early employee at Facebook, said in November that the social network was “ripping apart the social fabric of how society works.”
  • The new Center for Humane Technology includes an unprecedented alliance of former employees of some of today’s biggest tech companies. Apart from Mr. Harris, the center includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook’s Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots.
  • Its first project to reform the industry will be to introduce a Ledger of Harms — a website aimed at guiding rank-and-file engineers who are concerned about what they are being asked to build. The site will include data on the health effects of different technologies and ways to make products that are healthier
  • “Facebook appeals to your lizard brain — primarily fear and anger,” he said. “And with smartphones, they’ve got you for every waking moment.”
  • Apple’s chief executive, Timothy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Facebook investor Sean Parker also recently said of the social network that “God only knows what it’s doing to our children’s brains.”Mr. Steyer said, “You see a degree of hypocrisy with all these guys in Silicon Valley.”
  • The new group also plans to begin lobbying for laws to curtail the power of big tech companies. It will initially focus on two pieces of legislation: a bill being introduced by Senator Edward J. Markey, Democrat of Massachusetts, that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.
  • Mr. McNamee said he had joined the Center for Humane Technology because he was horrified by what he had helped enable as an early Facebook investor.
  • The Truth About Tech campaign was modeled on antismoking drives and focused on children because of their vulnerability.
  • He said the people who made these products could stop them before they did more harm.
manhefnawi

Why We Keep Falling for Fake News | Mental Floss - 0 views

  • Some studies have found that viral ideas arise at the intersection of busy social networks and limited attention spans. In a perfect world, only factually accurate, carefully reported and fact-checked stories would go viral. But that isn’t necessarily the case. Misinformation and hoaxes spread across the internet, and especially social media, like a forest fire in dry season.
  • Within the model, a successful viral story required two elements: a network already flooded with information, and users' limited attention spans. The more bot posts in a network, the more users were overwhelmed, and the more likely it was that fake news would spread.
  • “One way to increase the discriminative power of online social media would be to reduce information load by limiting the number of posts in the system,” they say. “Currently, bot accounts controlled by software make up a significant portion of online profiles, and many of them flood social media with high volumes of low-quality information to manipulate public discourse. By aggressively curbing this kind of abuse, social media platforms could improve the overall quality of information to which we are exposed.”
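The flooded-network mechanism these excerpts describe is simple enough to caricature in a few lines of Python. The sketch below is purely illustrative (every parameter, weight, and threshold is invented, not taken from the study): each user keeps only a short feed, bots flood the network with low-quality posts, and users reshare the best post they can still see.

    import random

    def average_reshared_quality(bot_fraction, n_users=200, attention=5,
                                 steps=2000, seed=0):
        """Toy flooded-network model. Each step one new post lands in a few
        feeds; feeds hold only the last few posts (limited attention), and
        every user reshares the best post still visible. Numbers invented."""
        rng = random.Random(seed)
        feeds = [[] for _ in range(n_users)]
        reshared = []
        for _ in range(steps):
            # bots post junk (quality 0.05); humans post mixed quality
            quality = 0.05 if rng.random() < bot_fraction else rng.random()
            for u in rng.sample(range(n_users), 10):  # post reaches 10 feeds
                feeds[u].append(quality)
                del feeds[u][:-attention]             # older posts fall out of view
            for feed in feeds:
                if feed:
                    reshared.append(max(feed))        # reshare best visible post
        return sum(reshared) / len(reshared)

    for frac in (0.0, 0.3, 0.6, 0.9):
        print(f"bot fraction {frac:.1f} -> "
              f"avg reshared quality {average_reshared_quality(frac):.2f}")

With these made-up numbers, average reshared quality falls steadily as the bot fraction rises, which is the qualitative point of the study: flooding plus limited attention degrades the network’s power to discriminate.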
Javier E

Reality Is Broken. We Have AI Photos to Blame. - WSJ - 0 views

  • AI headshots aren’t yet perfect, but they’re so close I expect we’ll start seeing them on LinkedIn, Tinder and other social profiles. Heck, we may already see them. How would we know?
  • Welcome to our new reality, where nothing is real. We now have photos initially captured with cameras that AI changes into something that never was
  • Or, like the headshot above, there are convincingly photographic images AI generates out of thin air.
  • Adobe, maker of Photoshop, released a new tool in Firefly, its generative-AI image suite, that lets you change and add in parts of a photo with AI imagery. Earlier this month, Google showed off a new Magic Editor, initially for Pixel phones, that allows you to easily manipulate a scene. And people are all over TikTok posting the results of AI headshot services like Try It On.
  • After testing a mix of AI editing and generating tools, I just have one question for all of you armchair philosophers: What even is a photo anymore?
  • I have always wondered what I’d look like as a naval officer. Now I don’t have to. I snapped a selfie and uploaded it to Adobe Firefly’s generative-fill tool. One click of the Background button and my cluttered office was wiped out. I typed “American flag” and in it went. Then I selected the Add tool, erased my torso and typed in “naval uniform.” Boom! Adobe even found me worthy of numerous awards and decorations.
  • Astronaut, fighter pilot, pediatrician. I turned myself into all of them in under a minute each. The AI-generated images did have noticeable issues: The uniforms were strange and had odd lettering, the stethoscope seemed to be cut in half and the backgrounds were warped and blurry. Yet the final images are fun, and the quality will only get better. 
  • In FaceApp, for iOS and Android, I was able to change my frown to a smile—with the right amount of teeth! I was also able to add glasses and change my hair color. Some said it looked completely real, others who know me well figured something was up. “Your teeth look too perfect.”
  • The real reality-bending happens in Midjourney, which can turn text prompts into hyper-realistic images and blend existing images in new ways. The quality of its generated images exceeds that of OpenAI’s Dall-E and Adobe’s Firefly.
  • it’s more complicated to use, since it runs through the chat app Discord. Sign up for the service, access the Midjourney bot through your Discord account (via web or app), then start typing in prompts. My video producer Kenny Wassus started working with a more advanced Midjourney plugin called Insight Face Swap-Bot, which lets you swap a face into a scene you’ve already made. He’s become a master—making me a Game of Thrones warrior and a Star Wars rebel, among other things.
  • We’re headed for a time when we won’t be able to tell how manipulated a photo is, what parts are real or fake.
  • when influential messages are conveyed through images—be they news or misinformation—people have reason to know a photo’s origin and what’s been done to it.
  • Firefly adds a “content credential,” digital information baked into the file, that says the image was manipulated with AI. Adobe is pushing to get news, tech and social-media platforms to use this open-source standard so we can all understand where the images we see came from.
  • So, yeah, our ability to spot true photos might depend on the cooperation of the entire internet. And by “true photo,” I mean one that captures a real moment—where you’re wearing your own boring clothes and your hair is just so-so, but you have the exact right number of teeth in your head.
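The “content credential” mentioned above can be illustrated in miniature. The real open standard (C2PA, which Adobe backs) cryptographically signs a manifest and embeds it in the image file; the hypothetical Python sketch below keeps only the core move of binding a declared edit history to the exact image bytes with a hash. Every name in it is invented for illustration, not taken from the spec.

    import hashlib

    def make_credential(image_bytes: bytes, edits: list[str]) -> dict:
        """Bind a record of AI edits to the exact pixels via a content hash.
        The real standard adds signatures, issuer identity, and in-file
        embedding, none of which is modeled here."""
        return {
            "content_hash": hashlib.sha256(image_bytes).hexdigest(),
            "edits": edits,  # e.g. ["generative-fill: background -> American flag"]
        }

    def verify(image_bytes: bytes, credential: dict) -> bool:
        """A viewer recomputes the hash; any change to the pixels breaks the bond."""
        return hashlib.sha256(image_bytes).hexdigest() == credential["content_hash"]

    img = b"...image bytes..."
    cred = make_credential(img, ["generative-fill: naval uniform"])
    print(verify(img, cred))              # True: credential matches these pixels
    print(verify(img + b"tamper", cred))  # False: pixels changed, claim void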
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • the moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions. (A toy sketch of this remember-and-suggest loop follows at the end of this item.)
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
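The remember-and-suggest loop flagged above, in which an agent logs your activities, recognizes a pattern, and offers help before being asked, can be caricatured in a few lines. This is a toy sketch under invented assumptions (a frequency counter and an arbitrary habit threshold), not anyone’s actual agent architecture; real systems would model intent with far richer signals.

    from collections import Counter

    class ToyAgent:
        """Minimal caricature of a proactive agent: remember activities,
        detect a crude 'pattern', and suggest before being asked."""

        def __init__(self, habit_threshold: int = 3):
            self.history = Counter()
            self.habit_threshold = habit_threshold  # invented cutoff

        def observe(self, activity: str) -> None:
            self.history[activity] += 1             # remember what the user does

        def suggest(self) -> str | None:
            if not self.history:
                return None
            habit, count = self.history.most_common(1)[0]
            if count >= self.habit_threshold:       # pattern found -> be proactive
                return f"You usually '{habit}' around now. Want me to start it?"
            return None                             # not enough signal yet

    agent = ToyAgent()
    for activity in ["read news", "read news", "check calendar", "read news"]:
        agent.observe(activity)
    print(agent.suggest())  # proactive suggestion; the user still decides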
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • In response to the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently (a toy weighted-sum sketch follows at the end of this item).
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
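The “coefficient” described in this item reduces, at its simplest, to a weighted sum over logged interactions between a pair of users. The sketch below is a guess at the shape of such a score: the event names and weights are invented, the only grounded assumption being the article’s claim that messaging is the strongest signal, and Facebook’s actual model is proprietary and far more elaborate.

    # Hypothetical tie-strength score in the spirit of the "coefficient":
    # a weighted sum over logged interactions between two users. Event names
    # and weights are invented; only "messaging counts most" is from the text.
    WEIGHTS = {
        "message": 5.0,       # assumed strongest signal
        "comment": 2.0,
        "like": 1.0,
        "profile_view": 0.5,
    }

    def coefficient(interaction_log: list[str]) -> float:
        """Score one pair of users from their logged interaction events."""
        return sum(WEIGHTS.get(event, 0.0) for event in interaction_log)

    pair_log = ["like", "message", "profile_view", "message", "comment"]
    print(coefficient(pair_log))  # 13.5 -> rank this friend's posts and ads higher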
Javier E

Can We Save the Truth? | History News Network - 0 views

  • For my own writing, I have settled on a method of writing and rewriting in which I seek to improve places where I use imprecise categories and labels, where I slide over gaps in my knowledge with vague phrases, where my ignorance leads to false statements. I find and fix many such places in the process of revising. I hope to produce writing which is as close as possible to being objective and true.
  • That I have such goals indicates that I do not accept the idea that there is no truth. I do believe that truth is very hard to reach, that nearly every proposition in history or science can be improved by more work, that we are imperfect seekers of truth. So we can approach truth, but perhaps never reach it.
  • I was prompted to write this because of the great irony that the political conservatism, which once argued for objective truth, now relies on the broadest attack on truth that we have ever experienced.
  • Conservative historians asserted that relativists were ruining everything, that truth did exist. They criticized post-modernism as a mask for moral relativism, connected this immorality with the popular movements of the 1960s, and asserted their own moral primacy (the Moral Majority).
  • The platform of the Republican Party on climate change and health care, two of our most pressing issues, is just one big lie
  • The intersection of a Republican Party which sees no value in distinguishing between truth and lies and an emerging technology that makes spreading lies incredibly easy is a great political danger. Is there truth? Not if those in power in America don’t care.
  • The use of a fabricated story about Ukraine and Joe Biden is a set of lies, that then led to one of the greatest scenes of collective public lying in American history, the response of Republican Representatives and Senators to the impeachment.
  • We are being bombarded with carefully crafted lies throughout cyberspace, designed to distort the results of the 2020 election. False stories about Joe Biden and Ukraine have already spread virally to millions of people.
  • Disinformation spread by bots can come from anywhere on the globe. The technology is non-partisan.
  • today we suffer from a multiplication of lies as a Republican tactic to win elections.
  • Does this mean that there is no truth? That any statement can be shown to be untrue by people with a different point of view? Objectivity is impossible, so there is no objective truth
  • This line of thinking was taken up especially by literary scholars, who argued that every text has multiple, perhaps infinite meanings. There is no true interpretation of a piece of writing
  • When this was expanded into other disciplines, it became more confusing. Some historians argued that it is impossible to make a true historical statement. Every statement can have multiple, even contradictory meanings. Excellent examples of this would be stat
Javier E

If Russia can create fake 'Black Lives Matter' accounts, who will next? - The Washingto... - 2 views

  • As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions.
  • The real problem is far broader than Russia: Who will use these methods next — and how?
  • There is no big barrier to entry in this game: It doesn’t cost much, it doesn’t take much time, it isn’t particularly high-tech, and it requires no special equipment.
  • I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear.
  • Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population.
  • There is no reason existing laws on transparency in political advertising, on truth in advertising or indeed on libel should not apply to social media as well as traditional media. There is a better case than ever against anonymity, at least against anonymity in the public forums of social media and comment sections, as well as for the elimination of social-media bots.
Javier E

Climatologist Michael E Mann: 'Good people fall victim to doomism. I do too sometimes' ... - 0 views

  • the “inactivists”, as I call them, haven’t given up; they have simply shifted from hard denial to a new array of tactics that I describe in the book as the new climate war.
  • Who is the enemy in the new climate war? It is fossil fuel interests, climate change deniers, conservative media tycoons, working together with petrostate actors like Saudi Arabia and Russia. I call this the coalition of the unwilling.
  • Today Russia uses cyberwarfare – bot armies and trolls – to get climate activists to fight one another and to seed arguments on social media. Russian trolls have attempted to undermine carbon pricing in Canada and Australia, and Russian fingerprints have been detected in the yellow-vest protests in France.
  • I am optimistic about a favourable shift in the political wind. The youth climate movement has galvanised attention and re-centred the debate on intergenerational ethics. We are seeing a tipping point in public consciousness. That bodes well. There is still a viable way forward to avoid climate catastrophe.
  • You can see from the talking points of inactivists that they are really in retreat. Republican pollsters like Frank Luntz have advised clients in the fossil fuel industry and the politicians who carry water for them that you can’t get away with denying climate change any more.
  • Let’s dig into deniers’ tactics. One that you mention is deflection. What are the telltale signs? Any time you are told a problem is your fault because you are not behaving responsibly, there is a good chance that you are being deflected from systemic solutions and policies
  • Blaming the individual is a tried and trusted playbook that we have seen in the past with other industries. In the 1970s, Coca Cola and the beverage industry did this very effectively to convince us we don’t need regulations on waste disposal. Because of that we now have a global plastic crisis. The same tactics are evident in the gun lobby’s motto, “guns don’t kill people, people kill people”, which is classic deflection
  • look at BP, which gave us the world’s first individual carbon footprint calculator. Why did they do that? Because BP wanted us looking at our carbon footprint not theirs.
  • Of course lifestyle changes are necessary but they alone won’t get us where we need to be. They make us more healthy, save money and set a good example for others.
  • But we can’t allow the forces of inaction to convince us these actions alone are the solution and that we don’t need systemic changes
  • I don’t eat meat. We get power from renewable energy. I have a plug-in hybrid vehicle. I do those things and encourage others to do them. But I don’t think it is helpful to shame people who are not as far along as you.
  • Instead, let’s help everybody to move in that direction. That is what policy and system change is about: creating incentives so even those who don’t think about their environmental footprint are still led in that direction.
  • Another new front in the new climate war is what you call “doomism”. What do you mean by that? Doom-mongering has overtaken denial as a threat and as a tactic. Inactivists know that if people believe there is nothing you can do, they are led down a path of disengagement
  • They unwittingly do the bidding of fossil fuel interests by giving up. What is so pernicious about this is that it seeks to weaponise environmental progressives who would otherwise be on the frontline demanding change. These are folk of good intentions and good will, but they become disillusioned or depressed and they fall into despair.
  • Many of the prominent doomist narratives – [Jonathan] Franzen, David Wallace-Wells, the Deep Adaptation movement – can be traced back to a false notion that an Arctic methane bomb will cause runaway warming and extinguish all life on earth within 10 years. This is completely wrong. There is no science to support that.
  • Good people fall victim to doomism. I do too sometimes. It can be enabling and empowering as long as you don’t get stuck there. It is up to others to help ensure that experience can be cathartic.
  • the entry of new participants. Bill Gates is perhaps the most prominent. His new book, How to Prevent a Climate Disaster, offers a systems analyst approach to the problem, a kind of operating system upgrade for the planet. What do you make of his take? I want to thank him for using his platform to raise awareness of the climate crisis
  • I disagree with him quite sharply on the prescription. His view is overly technocratic and premised on an underestimate of the role that renewable energy can play in decarbonising our civilisation
  • If you understate that potential, you are forced to make other risky choices, such as geoengineering and carbon capture and sequestration. Investment in those unproven options would crowd out investment in better solutions.
  • Gates writes that he doesn’t know the political solution to climate change. But the politics are the problem buddy. If you don’t have a prescription of how to solve that, then you don’t have a solution and perhaps your solution might be taking us down the wrong path.
  • What are the prospects for political change with Joe Biden in the White House? Breathtaking. Biden has surprised even the most ardent climate hawks in the boldness of his first 100-day agenda, which goes well beyond any previous president, including Obama when it comes to use of executive actions. He has incorporated climate policy into every single government agency and we have seen massive investments in renewable energy infrastructure, cuts in subsidies for fossil fuels, and the cancellation of the Keystone XL pipeline.
  • On the international front, the appointment of John Kerry, who helped negotiate the Paris Accord, has telegraphed to the rest of the world that the US is back and ready to lead again
  • That is huge and puts pressure on intransigent state actors like [Australian prime minister] Scott Morrison, who has been a friend of the fossil fuel industry in Australia. Morrison has changed his rhetoric dramatically since Biden became president. I think that creates an opportunity like no other.
  • Have the prospects for that been helped or hindered by Covid? I see a perfect storm of climate opportunity. Terrible as the pandemic has been, this tragedy can also provide lessons, particularly on the importance of listening to the word of science when facing risks
  • Out of this crisis can come a collective reconsideration of our priorities. How to live sustainably on a finite planet with finite space, food and water. A year from now, memories and impacts of coronavirus will still feel painful, but the crisis itself will be in the rear-view mirror thanks to vaccines. What will loom larger will be the greater crisis we face – the climate crisis.