TOK Friends: Group items tagged writers

Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 0 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Javier E

It's Not Just the Discord Leak. Group Chats Are the Internet's New Chaos Machine. - The... - 0 views

  • Digital bulletin-board systems—proto–group chats, you could say—date back to the 1970s, and SMS-style group chats popped up in WhatsApp and iMessage in 2011.
  • As New York magazine put it in 2019, group chats became “an outright replacement for the defining mode of social organization of the past decade: the platform-centric, feed-based social network.”
  • unlike the Facebook feed or Twitter, where posts can be linked to wherever, group chats are a closed system—a safe and (ideally) private space. What happens in the group chat ought to stay there.
  • ...11 more annotations...
  • In every group chat, no matter the size, participants fall into informal roles. There is usually a leader—a person whose posting frequency drives the group or sets the agenda. Often, there are lurkers who rarely chime in
  • Larger group chats are not immune to the more toxic dynamics of social media, where competition for attention and herd behavior cause infighting, splintering, and back-channeling.
  • It’s enough to make one think, as the writer Max Read argued, that “venture-capitalist group chats are a threat to the global economy.” Now you might also say they are a threat to national security.
  • thanks to the private nature of the group chats, this information largely stayed out of the public eye. As Bloomberg reported, “By the time most people figured out that a bank run was a possibility … it was already well underway.”
  • The investor panic that led to the swift collapse of Silicon Valley Bank in March was effectively caused by runaway group-chat dynamics. “It wasn’t phone calls; it wasn’t social media,” a start-up founder told Bloomberg in March. “It was private chat rooms and message groups.
  • Unlike traditional social media or even forums and message boards, group chats are nearly impossible to monitor.
  • as our digital social lives start to splinter off from feeds and large audiences and into siloed areas, a different kind of unpredictability and chaos awaits. Where social networks create a context collapse—a process by which information meant for one group moves into unfamiliar networks and is interpreted by outsiders—group chats seem to be context amplifiers
  • group chats provide strong relationship dynamics, and create in-jokes and lore. For decades, researchers have warned of the polarizing effects of echo chambers across social networks; group chats realize this dynamic fully.
  • Weird things happen in echo chambers. Constant reinforcement of beliefs or ideas might lead to group polarization or radicalization. It may trigger irrational herd behavior such as, say, attempting to purchase a copy of the Constitution through a decentralized autonomous organization
  • Obsession with in-group dynamics might cause people to lose touch with the reality outside the walls of a particular community; the private-seeming nature of a closed group might also lull participants into a false sense of security, as it did with Teixeira.
  • the age of the group chat appears to be at least as unpredictable, swapping a very public form of volatility for a more siloed, incalculable version
Javier E

By the Book: Charles Frazier Wants You to Wait Before Reading the Classics - The New York Times - 0 views

  • Disappointing, overrated, just not good: What book did you feel as if you were supposed to like, and didn’t? Do you remember the last book you put down without finishing?
  • If I’m really not enjoying a book, I bog down after 50 pages or so and stop. In those cases, I try to remind myself that not every book was written specifically for my tastes and that it’s best not to confuse my own preferences with gospel truth. I also find it useful to recognize that the writer may have spent years writing the book and knows it better — or at least deeper — than I do, so maybe the fault or flaw resides partially or completely in me.
Javier E

Opinion | Empathy Is Exhausting. There Is a Better Way. - The New York Times - 0 views

  • “What can I even do?” Many people are feeling similarly defeated, and many others are outraged by the political inaction that ensues. A Muslim colleague of mine said she was appalled to see so much indifference to the atrocities and innocent lives lost in Gaza and Israel. How could anyone just go on as if nothing had happened?
  • inaction isn’t always caused by apathy. It can also be the product of empathy. More specifically, it can be the result of what psychologists call empathic distress: hurting for others while feeling unable to help.
  • I felt it intensely this fall, as violence escalated abroad and anger echoed across the United States. Helpless as a teacher, unsure of how to protect my students from hostility and hate. Useless as a psychologist and writer, finding words too empty to offer any hope. Powerless as a parent, searching for ways to reassure my kids that the world is a safe place and most people are good. Soon I found myself avoiding the news altogether and changing the subject when war came up
  • ...22 more annotations...
  • Understanding how empathy can immobilize us like that is a critical step for helping others — and ourselves.
  • Early researchers labeled it compassion fatigue and described it as the cost of caring.
  • Having concluded that nothing they do will make a difference, they start to become indifferent.
  • The symptoms of empathic distress were originally diagnosed in health care, with nurses and doctors who appeared to become insensitive to the pain of their patients.
  • Empathic distress explains why many people have checked out in the wake of these tragedies
  • when two neuroscientists, Olga Klimecki and Tania Singer, reviewed the evidence, they discovered that “compassion fatigue” is a misnomer. Caring itself is not costly. What drains people is not merely witnessing others’ pain but feeling incapable of alleviating it.
  • In times of sustained anguish, empathy is a recipe for more distress, and in some cases even depression. What we need instead is compassion.
  • empathy and compassion aren’t the same. Empathy absorbs others’ emotions as your own: “I’m hurting for you.”
  • Compassion focuses your action on their emotions: “I see that you’re hurting, and I’m here for you.”
  • “Empathy is biased,” the psychologist Paul Bloom writes. It’s something we usually reserve for our own group, and in that sense, it can even be “a powerful force for war and atrocity.”
  • Dr. Singer and their colleagues trained people to empathize by trying to feel other people’s pain. When the participants saw someone suffering, it activated a neural network that would light up if they themselves were in pain. It hurt. And when people can’t help, they escape the pain by withdrawing.
  • To combat this, the Klimecki and Singer team taught their participants to respond with compassion rather than empathy — focusing not on sharing others’ pain but on noticing their feelings and offering comfort.
  • A different neural network lit up, one associated with affiliation and social connection. This is why a growing body of evidence suggests that compassion is healthier for you and kinder to others than empathy:
  • When you see others in pain, instead of causing you to get overloaded and retreat, compassion motivates you to reach out and help
  • The most basic form of compassion is not assuaging distress but acknowledging it.
  • in my research, I’ve found that being helpful has a secondary benefit: It’s an antidote to feeling helpless.
  • To figure out who needs your support after something terrible happens, the psychologist Susan Silk suggests picturing a dart board, with the people closest to the trauma in the bull’s-eye and those more peripherally affected in the outer rings.
  • Once you’ve figured out where you belong on the dart board, look for support from people outside your ring, and offer it to people closer to the center.
  • Even if people aren’t personally in the line of fire, attacks targeting members of a specific group can shatter a whole population’s sense of security.
  • If you notice that people in your life seem disengaged around an issue that matters to you, it’s worth considering whose pain they might be carrying.
  • Instead of demanding that they do more, it may be time to show them compassion — and help them find compassion for themselves, too.
  • Your small gesture of kindness won’t end the crisis in the Middle East, but it can help someone else. And that can give you the strength to help more.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”