
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets
  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • ...37 more annotations...
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.  
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources
  • In the face of a pandemic, he says, the public needs an early basic and blunt lesson in virology
  • and mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes, he says.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • people didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system, still reliant on manual data collection methods—paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and act like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say they were crucial at the start of the pandemic to give doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.  
  • The key isn’t to have lockdowns last a long time, but to deploy them earlier,
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong—all experienced at handling disease outbreaks such as SARS in 2003 and MERS—avoided lockdowns through widespread masking, tracking the spread of the virus with testing and contact tracing, and quarantining infected individuals.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • Early in the pandemic, public-health officials were clear: The people at increased risk for severe Covid-19 illness were those who were older or immunocompromised, or who had chronic kidney disease, Type 2 diabetes or serious heart conditions
  • But it had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks, spending time in crowded indoor places.
  • it has become clear that even people with mild cases of Covid-19 can develop long-term serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease—even people who were younger and had not been hospitalized
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had
  •  A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
Javier E

Elon Musk's Disastrous Weekend on Twitter - The Atlantic - 0 views

  • It’s useful to keep in mind that Twitter is an amplification machine. It is built to allow people, with astonishingly little effort, to reach many other people. (This is why brands like it.)
  • There are a million other ways to express yourself online: This has nothing to do with free speech, and Twitter is not obligated to protect your First Amendment rights.
  • When Elon Musk and his fans talk about free speech on Twitter, they’re actually talking about loud speech. Who is allowed to use this technology to make their message very loud, to the exclusion of other messages?
  • ...6 more annotations...
  • Musk seems willing to grant this power to racists, conspiracy theorists, and trolls. This isn’t great for reasonable people who want to have nuanced conversations on social media, but the joke has always been on them. Twitter isn’t that place, and it never will be.
  • one of Musk’s first moves after taking over was to fire the company’s head of policy—an individual who had publicly stated a commitment to both free speech and preventing abuse.
  • On Friday, Musk tweeted that Twitter would be “forming a content moderation council with widely diverse viewpoints,” noting that “no major content decisions [would] happen before that council convenes.” Just three hours later, replying to a question about lifting a suspension on The Daily Wire’s Jordan Peterson, Musk signaled that maybe that wasn’t exactly right; he tweeted: “Anyone suspended for minor & dubious reasons will be freed from Twitter jail.” He says he wants a democratic council, yet he’s also setting policy by decree.
  • Perhaps most depressingly, this behavior is quite familiar. As Techdirt’s Mike Masnick has pointed out, we are all stuck “watching Musk speed run the content moderation learning curve” and making the same mistakes that social-media executives made with their platforms in their first years at the helm.
  • Musk has charged himself with solving the central, seemingly intractable issue at the core of hundreds of years of debate about free speech. In the social-media era, no entity has managed to balance preserving both free speech and genuine open debate across the internet at scale.
  • Musk hasn’t just given himself a nearly impossible task; he’s also created conditions for his new company’s failure. By acting incoherently as a leader and lording the prospect of mass terminations over his employees, he’s created a dysfunctional and chaotic work environment for the people who will ultimately execute his changes to the platform
Javier E

Jonathan Haidt on the 'National Crisis' of Gen Z - WSJ - 0 views

  • he has in mind the younger cohort, Generation Z, usually defined as those born between 1997 and 2012. “When you look at Americans born after 1995,” Mr. Haidt says, “what you find is that they have extraordinarily high rates of anxiety, depression, self-harm, suicide and fragility.” There has “never been a generation this depressed, anxious and fragile.”
  • He attributes this to the combination of social media and a culture that emphasizes victimhood
  • Social media is Mr. Haidt’s present obsession. He’s working on two books that address its harmful impact on American society: “Kids in Space: Why Teen Mental Health Is Collapsing” and “Life After Babel: Adapting to a World We Can No Longer Share.”
  • ...26 more annotations...
  • What happened in 2012, when the oldest Gen-Z babies were in their middle teens? That was the year Facebook acquired Instagram and young people flocked to the latter site. It was also “the beginning of the selfie era.”
  • Mr. Haidt’s research, confirmed by that of others, shows that depression rates started to rise “all of a sudden” around 2013, “especially for teen girls,” but “it’s only Gen Z, not the older generations.” If you’d stopped collecting data in 2011, he says, you’d see little change from previous years. “By 2015 it’s an epidemic.” (His data are available in an open-source document.)
  • Mr. Haidt imagines “literally launching our children into outer space” and letting their bodies grow there: “They would come out deformed and broken. Their limbs wouldn’t be right. You can’t physically grow up in outer space. Human bodies can’t do that.” Yet “we basically do that to them socially. We launched them into outer space around the year 2012,” he says, “and then we expect that they will grow up normally without having normal human experiences.”
  • He calls this phenomenon “compare and despair” and says: “It seems social because you’re communicating with people. But it’s performative. You don’t actually get social relationships. You get weak, fake social links.”
  • That meant the first social-media generation was one of “weakened kids” who “hadn’t practiced the skills of adulthood in a low-stakes environment” with other children. They were deprived of “the normal toughening, the normal strengthening, the normal anti-fragility.”
  • Now, their childhood “is largely just through the phone. They no longer even hang out together.” Teenagers even drive less than earlier generations did.
  • Mr. Haidt especially worries about girls. By 2020 more than 25% of female teenagers had “a major depression.” The comparable number for boys was just under 9%.
  • The comparable numbers for millennials at the same age registered at half the Gen-Z rate: about 13% for girls and 5% for boys. “Kids are on their devices all the time,”
  • Most girls, by contrast, are drawn to “visual platforms,” Instagram and TikTok in particular. “Those are about display and performance. You post your perfect life, and then you flip through the photos of other girls who have a more perfect life, and you feel depressed.”
  • Mr. Haidt says he has no antipathy toward the young, and he calls millennials “amazing.”
  • “Social media is incompatible with liberal democracy because it has moved conversation, and interaction, into the center of the Colosseum. We’re not there to talk to each other. We’re there to perform” before spectators who “want blood.”
  • To illustrate his point about Gen Z, Mr. Haidt challenges people to name young people today who are “really changing the world, who are doing big things that have an impact beyond their closed ecosystem.”
  • He can think of only two, neither of them American: Greta Thunberg, 19, the Swedish climate militant, and Malala Yousafzai, 25, the Pakistani advocate for female education
  • “I’m predicting that they will be less effective, less impactful, than previous generations.” Why? “You should always keep your eye on whether people are in ‘discover mode’ or ‘defend mode.’ ” In the former mode, you seize opportunities to be creative. In the latter, “you’re not creative, you’re not future-thinking, you’re focused on threats in the present.”
  • University students who matriculated starting in 2014 or so have arrived on campus in defend mode: “Here they are in the safest, most welcoming, most inclusive, most antiracist places on the planet, but many of them were acting like they were entering some sort of dystopian, threatening, immoral world.”
  • 56% of liberal women 18 to 29 responded affirmatively to the question: Has a doctor or other healthcare provider ever told you that you have a mental health condition? “Some of that,” Mr. Haidt says, “has to be just self-presentational,” meaning imagined.
  • This new ideology . . . valorizes victimhood. And if your sub-community motivates you to say you have an anxiety disorder, how is this going to affect you for the rest of your life?” He answers his own question: “You’re not going to take chances, you’re going to ask for accommodations, you’re going to play it safe, you’re not going to swing for the fences, you’re not going to start your own company.”
  • Whereas millennial women are doing well, “Gen-Z women, because they’re so anxious, are going to be less successful than Gen-Z men—and that’s saying a lot, because Gen-Z men are messed up, too.”
  • The problem, he says, is distinct to the U.S. and other English-speaking developed countries: “You don’t find it as much in Europe, and hardly at all in Asia.” Ideas that are “nurtured around American issues of race and gender spread instantly to the U.K. and Canada. But they don’t necessarily spread to France and Germany, China and Japan.”
  • something I hear from a lot of managers, that it’s very difficult to supervise their Gen-Z employees, that it’s very difficult to give them feedback.” That makes it hard for them to advance professionally by learning to do their jobs better.
  • “this could severely damage American capitalism.” When managers are “afraid to speak up honestly because they’ll be shamed on Twitter or Slack, then that organization becomes stupid.” Mr. Haidt says he’s “seen a lot of this, beginning in American universities in 2015. They all got stupid in the same way. They all implemented policies that backfire.”
  • Mr. Haidt, who describes himself as “a classical liberal like John Stuart Mill,” also laments the impact of social media on political discourse
  • Social media and selfies hit a generation that had led an overprotected childhood, in which the age at which parents allowed children outside on their own had risen from 7 or 8, the norm for previous generations, to between 10 and 12.
  • Is there a solution? “I’d raise the age of Internet adulthood to 16,” he says—“and enforce it.”
  • By contrast, “life went onto phone-based apps 10 years ago, and the protections we have for children are zero, absolutely zero.” The damage to Generation Z from social media “so vastly exceeds the damage from Covid that we’re going to have to act.”
  • Gen Z, he says, “is not in denial. They recognize that this app-based life is really bad for them.” He reports that they wish they had childhoods more like those of their parents, in which they could play outside and have adventures.
karenmcgregor

Empower Your Studies with a Trusted CCNA Assignment Helper: Navigating the Path to Netw... - 2 views

Are you a student immersed in the complexities of CCNA coursework, searching for a reliable CCNA assignment helper to lighten your academic load? Look no further! At computernetworkassignmenthelp.c...

#domyccnaassignment #ccna #ccnaassignmenthelp #paytodomyccnaassignment #education

started by karenmcgregor on 05 Dec 23 no follow-up yet
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS waits more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Opinion | Yuval Harari: A.I. Threatens Democracy - The New York Times - 0 views

  • Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
  • This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election
  • As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information.
  • ...25 more annotations...
  • In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with one another, and even more so the ability to listen.
  • But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
  • Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content
  • In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain, you grab the attention of that human and keep that person glued to the screen.
  • the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.
  • The algorithms began to deliberately promote such content.
  • Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
  • GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”
  • At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.
  • This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.
  • The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
  • However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation
  • Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
  • In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.
  • The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
  • In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people
  • What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?
  • A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai.
  • Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.
  • much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.
  • Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots
  • When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.
  • Information technology has always been a double-edged sword.
  • Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users.
  • A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned.
Javier E

The Warehouse Worker Who Became a Philosopher - The Atlantic - 0 views

  • Eleven years ago, Stephen West was stocking groceries at a Safeway warehouse in Seattle. He was 24, and had been working to support himself since dropping out of high school at 16. Homeless at times, he had mainly grown up in group homes and foster-care programs up and down the West Coast after being taken away from his family at 9. He learned to find solace in books.
  • He would tell himself to be grateful for the work: “It’s manual, physical labor, but it’s better than 99.9 percent of jobs that have ever existed in human history.” By the time most kids have graduated from college, he had consumed “the entire Western canon of philosophy.”
  • A notable advantage of packing boxes in a warehouse all day is that rote, solitary work can be accomplished with headphones on. “I would just queue up audio books and listen and pause and think about it and contextualize as much as I could,” he told me. “I was at work for eight hours a day. Seven hours of it would be spent reading philosophy, listening to philosophy; a couple hours interpreting it, just thinking about it. In the last hour of the day, I’d turn on a podcast.”
  • ...11 more annotations...
  • West started his podcast, Philosophize This, in 2013. Podcasting, he realized, was the one “technological medium where there’s no barrier to entry.” He “just turned on a microphone and started talking.”
  • Within months, he was earning enough from donations to quit his warehouse job and pursue philosophy full-time. Now he has some 2 million monthly listeners on Spotify and 150,000 subscribers on YouTube, and Philosophize This holds the No. 3 spot in the country for philosophy podcasts on Apple.
  • He treats the philosophical claims of any given thinker, however outdated, within the sense-making texture of their own time, oscillating adroitly between explanation and criticism and—this is rare—refusing to condescend from the privilege of the present
  • He is, as he once described the 10th-century Islamic scholar Al-Fārābī, “a peacemaker between different time periods.” All the episodes display the qualities that make West so compelling: unpretentious erudition, folksy delivery, subtle wit, and respect for a job well done.
  • “Academic philosophy is cloistered and impenetrable, but it needn’t be,” he told me. West, he said, “doesn’t preen or preach or teach; he just talks to you like a smart, curious adult.”
  • “He’s coming at this stuff from the perspective of a person actually searching for interesting answers, not as someone who is seeking academic legitimacy,” Shapiro said. “Too much philosophy is directed toward the other philosophers in the walled garden. He’s doing the opposite.”
  • I counted just six books on a shelf next to a pair of orange dumbbells: The Complete Essays of Montaigne; The Creative Act, by Rick Rubin; Richard Harland’s Literary Theory From Plato to Barthes; an anthology of feminist theory; And Yet, by Christopher Hitchens; and Foucault’s The Order of Things. The rest of his reading material lives on a Kindle. “If you look at the desktop of my computer, it’ll be a ton of tabs open,” he said, laughing. “Maybe it’s the clutter you’d be expecting.”
  • He just “always wanted to be wiser,” Alina said. “I mean, when he was younger, he literally Googled who was the wisest person.” (Here we can give Socrates his flowers once again.) “That’s how he got into philosophy.”
  • All of us are, as the Spanish philosopher José Ortega y Gasset observed, inexorably the combination of our innate, inimitable selves and the circumstances in which we are embedded. “Yo soy yo y mi circunstancia.”
  • We are captive to the economic, racial, and technological limits of our times, just as we may be propelled forward in unforeseen ways by the winds of innovation.
  • Now he can design any life he likes. “I could be in Bora Bora right now,” he told me. “But I don’t want to be.” He wants to be in Puyallup with his family, in a place “where I can read and do my work and pace around and think about stuff.”
Javier E

Paul Krugman on Fighting Zombies, How He Works and Writes, and Where the United States ... - 0 views

  • I’m more or less constantly looking for interesting news items and data that might make for a good column, and archiving it. On the day one is due, I look at the news to see what might make an impact that day, sketch out a rough outline of how the argument should go, and just start writing.
  • think about what your readers know — and what they don’t. There are a lot of simple points that can be revelatory to even well-informed readers, but you have to convey them without either jargon or condescension.
  • you need some entertainment value — a hook to reel them in at the beginning, a stinger at the end so they know what they’ve learned.
Javier E

Opinion | Harris Gonna Code Switch - The New York Times - 0 views

  • language is about reaching into another mind. It’s about connecting.
  • Code-switching is one of the ways that humans use language to connect. Using the colloquial dialect of a language serves the same function as drinking or getting a mani-pedi together. It says, “We’re all the same.” It is especially natural, and common, when seeking connection about folksier things or summoning a note of cutting through the nonsense and getting to the heart of things in a “Let’s face it” way.
  • This is why many of us readily say “Ain’t gonna happen” even if we aren’t given to saying “ain’t” regularly.
  • ...2 more annotations...
  • Closer to home, Maya Angelou deftly explained how Black Americans code-switch when she wrote: “We learned to slide out of one language and into another without being conscious of the effort. At school, in a given situation, we might respond with, ‘That’s not unusual.’ But in the street, meeting the same situation, we easily said, ‘It be’s like that sometimes.’”
  • It is in this light that we must evaluate an X post like “It’s pretty weird to change your accent on the fly depending on which audience you’re speaking to.” Wrong. This is like saying it’s pretty weird to dress according to what your plans for the day are.
Javier E

Book Review: 'Good Reasonable People,' by Keith Payne; 'Tribal,' by Michael Morris - Th... - 0 views

  • When it comes to how our minds work, people have a lot in common, but instead of bringing us together, our shared traits are doing a remarkably effective job of tearing us apart.
  • “Good Reasonable People,” by Keith Payne, and “Tribal,” by Michael Morris, explore the ubiquitous subject of political polarization through the lens of psychology and its connection to group identity. Payne is a social psychologist; Morris is a cultural psychologist.
  • Both Payne and Morris emphasize how much meaning and comfort we derive from our group identities, whether we consciously think in such terms or not.
  • ...16 more annotations...
  • Payne cites a famous experiment by Henri Tajfel, a pioneering figure in social psychology, who found that his students heaped exorbitant significance onto a distinction as meaningless as whether they overestimated or underestimated the number of dots in a picture. The students started favoring the members of their “in group” and disparaging the members of the “out group.”
  • Payne says that he has repeatedly replicated the effect in his own classes: “Underestimators quickly assume that they are the realistic and cautious group, and hence smarter and better than the overestimators. Overestimators, on the other hand, assume that they are optimistic and positive people, and hence better than the dreary underestimators.”
  • It doesn’t take much for people to turn trivial differences into psychologically potent chasms between “us” and “them.”
  • Early in his book he recounts the surreal experience of asking his brother Brad if he believed that Trump had won the 2020 election. To Payne’s surprise, Brad admitted that Biden had won — but only, Brad specified, “by the letter of the law,” adding that “there was some malfeasance,” even if “it can’t be proven.” Payne could see how this conclusion allowed his brother “to come to terms with the evidence” while also letting him “hold on to the larger feeling that Biden’s win was, deep down, illegitimate.”
  • most of us identify as the “good reasonable people” of his title. Our “psychological immune systems” kick in to discount or reject any information that would make us think otherwise. We want to believe that we seek the cold, hard truth while they wallow in self-serving lies
  • We also live at a time when ostensible validation for any belief is only a click away. “People are not passive dupes,” Payne explains, “but rather they seek out the stories they want to be told. If one channel shuts down, they just find another.”
  • Morris, for his part, uses the term “epistemic tribalism” to describe the tendency of people to reach conclusions through “peer-instinct conformist learning” — a fancy way of saying that we’re susceptible to the influence of peers
  • Morris says that rationality isn’t our “strong suit,” but rationalizing is. He offers the example of students who were given a fake newspaper article about a congressional vote. The policy details made no difference to their evaluations of the plan. If their party voted for it, they liked it; if the other party voted for it, they didn’t — and they denied that party loyalty had anything to do with their views.
  • our “tribal instincts” are what enabled early humans to collaborate, generating the kind of “coordinated activity” and “common knowledge” that allowed our species to flourish
  • If we can find a way to “harness tribal impulses,” Morris writes, we could “heal a nation.”
  • Morris argues that such discrimination persists because of “ethical tribalism,” which “involves no anger, malice or ill regard” toward others, but which bends the rules in favor of one’s own “clan” — people who pass the “culture fit” test because they come from the same group
  • Undoubtedly the people who do the discriminating would like to see themselves this way — not as hostile and mean (toward others), but as kind and generous (toward their own).
  • seems eager to reassure readers that, whatever their political allegiances, their motives are not just understandable but good; they are not acting on ugly prejudices from “a century or half a century ago.”
  • “Bringing about political change is separate from debating politics,” writes Payne, who goes on to explain that genuine persuasion requires trust and connection — both of which seem to be in diminishing supply these days
  • He says that change is slow, and connecting mostly requires interacting one on one
  • But people have to want to build that trust in the first place