Home/ TOK Friends/ Group items matching "heart" in title, tags, annotations or url

Javier E

Yuval Noah Harari paints a grim picture of the AI age, roots for safety checks | Technology News, The Indian Express - 0 views

  • Yuval Noah Harari, known for the acclaimed non-fiction book Sapiens: A Brief History of Humankind, in his latest article in The Economist, has said that artificial intelligence has “hacked” the operating system of human civilization.
  • he said that the newly emerged AI tools in recent years could threaten the survival of human civilization from an “unexpected direction.”
  • He demonstrated how AI could impact culture by talking about language, which is integral to human culture. “Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artifacts we created by inventing myths and writing scriptures,” wrote Harari.
  • He stated that democracy is also a language that dwells on meaningful conversations, and when AI hacks language it could also destroy democracy.
  • The 47-year-old wrote that the biggest challenge of the AI age was not the creation of intelligent tools but forging collaboration between humans and machines.
  • To highlight the extent of how AI-driven misinformation can change the course of events, Harari touched upon the cult QAnon, a political movement affiliated with the far-right in the US. QAnon disseminated misinformation via “Q drops” that were seen as sacred by followers.
  • Harari also shed light on how AI could form intimate relationships with people and influence their decisions. “Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,” he wrote. To demonstrate this, he cited the example of Blake Lemoine, a Google engineer who lost his job after publicly claiming that the AI chatbot LaMDA had become sentient. He asked: if AI can influence people to risk their jobs, what else could it induce them to do?
  • Harari also said that intimacy was an effective weapon in the political battle of minds and hearts. He said that in the past few years, social media has become a battleground for controlling human attention, and the new generation of AI can convince people to vote for a particular politician or buy a certain product.
  • In his bid to call attention to the need to regulate AI technology, Harari said that the first regulation should be to make it mandatory for AI to disclose that it is an AI. He said it was important to halt the ‘irresponsible deployment’ of AI tools in the public domain and to regulate AI before it regulates us.
  • The author also shed light on how the current social and political systems are incapable of dealing with the challenges posed by AI. Harari emphasised the need for an ethical framework to respond to these challenges.
  • He argued that while GPT-3 had made remarkable progress, it was far from replacing human interactions
Javier E

CarynAI, created with GPT-4 technology, will be your girlfriend - The Washington Post - 0 views

  • CarynAI also shows how AI applications can increase the ability of a single person to reach an audience of thousands in a way that, for users, may feel distinctly personal.
  • The impact could be enormous for someone forming something resembling a personal relationship with thousands or millions of online followers. It could also show how thin and tenuous these simulations of human connection could become.
  • CarynAI also is a reminder that sex and romance are often the first realm in which technological progress becomes profitable. Marjorie acknowledges that some of the exchanges with CarynAI become sexually explicit
  • CarynAI is the first major release from a company called Forever Voices. The company previously has created realistic AI chatbots that allow users to talk with replicated versions of Steve Jobs, Kanye West, Donald Trump and Taylor Swift
  • CarynAI is a far more sophisticated product, the company says, and part of Forever Voices’ new AI companion initiative, meant to provide users with a girlfriend-like experience that fans can emotionally bond with.
  • John Meyer, CEO and founder of Forever Voices, said that he created the company last year, after trying to use AI to reconnect with his father, who passed away in 2017. He built an AI voice chatbot that replicated his father’s voice and personality, and found the experience of talking to it incredibly healing. “It was a remarkable experience to talk to him again in a super realistic way,” Meyer said. “I’ve been in tech my whole life, I’m a programmer, so it was easy for me to start building something like that especially as things got more advanced with the AI space.”
  • Meyer’s company has about 10 employees. One job Meyer is hoping to fill soon is chief ethics officer. “There are a lot of ways to do this wrong,” he said.
  • One safeguard is trying to limit the amount of time a user is allowed to chat with CarynAI. To keep users from becoming addicted, CarynAI is programmed to wind down conversations after about an hour, encouraging users to pick back up later. But there is no hard time limit on use, and some users are spending hours speaking to CarynAI per day, according to Marjorie’s manager, Ishan Goel.
  • “I consider myself a futurist at heart and when I look into the future I believe this is the beginning of a very diverse future consisting of AI to human companionship.”
  • Elizabeth Snower, founder of ICONIQ, which creates conversational 3D avatars, predicts that soon there will be “AI influencers on every social platform that are influencing consumer decisions.”
  • “A lot of people have just been kind of really mad at the existence of this. They think that it’s the end of humanity,” she said.
  • Marjorie hopes the backlash will fade when other online personalities begin rolling out their own AI companions
  • “I think in the next five years, most Americans will have an AI companion in their pocket in some way, shape or form, whether it’s an ultra flirty AI that you’re dating, an AI that’s your personal trainer, or simply a tutor companion. Those are all things that we are building internally.”
  • That strikes AI adviser and investor Allie K. Miller as a likely outcome. “I can imagine a future in which everyone — celebrities, TV characters, influencers, your brother — has an online avatar that they invite their audience or friends to engage with. … With the accessibility of these models, I’m not surprised it’s expanding to scaled interpersonal relationships.”
Javier E

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
Javier E

Opinion | How Behavioral Economics Took Over America - The New York Times - 0 views

  • Some behavioral interventions do seem to lead to positive changes, such as automatically enrolling children in school free lunch programs or simplifying mortgage information for aspiring homeowners. (Whether one might call such interventions “nudges,” however, is debatable.)
  • it’s not clear we need to appeal to psychology studies to make some common-sense changes, especially since the scientific rigor of these studies is shaky at best.
  • Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing
  • Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials
  • this key result was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.
  • And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution
  • Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn’t change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science.
  • The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.
  • These cases result from lax standards around data collection, which will hopefully be corrected. But they also result from strong financial incentives: the possibility of salaries, book deals and speaking and consulting fees that range into the millions. Researchers can get those prizes only if they can show “significant” findings.
  • It is no coincidence that behavioral economics, from Dr. Kahneman to today, tends to be pro-business. Science should be not just reproducible, but also free of obvious ideology.
  • Technology and modern data science have only further entrenched behavioral economics. Its findings have greatly influenced algorithm design.
  • The collection of personal data about our movements, purchases and preferences inform interventions in our behavior from the grocery store to who is arrested by the police.
  • Setting people up for safety and success and providing good default options isn’t bad in itself, but there are more sinister uses as well. After all, not everyone who wants to exploit your cognitive biases has your best interests at heart.
  • Despite all its flaws, behavioral economics continues to drive public policy, market research and the design of digital interfaces.
  • One might think that a kind of moratorium on applying such dubious science would be in order — except that enacting one would be practically impossible. These ideas are so embedded in our institutions and everyday life that a full-scale audit of the behavioral sciences would require bringing much of our society to a standstill.
  • There is no peer review for algorithms that determine entry to a stadium or access to credit. To perform even the most banal, everyday actions, you have to put implicit trust in unverified scientific results.
  • We can’t afford to defer questions about human nature, and the social and political policies that come from them, to commercialized “research” that is scientifically questionable and driven by ideology. Behavioral economics claims that humans aren’t rational.
  • That’s a philosophical claim, not a scientific one, and it should be fought out in a rigorous marketplace of ideas. Instead of unearthing real, valuable knowledge of human nature, behavioral economics gives us “one weird trick” to lose weight or quit smoking.
  • Humans may not be perfectly rational, but we can do better than the predictably irrational consequences that behavioral economics has left us with today.
Javier E

The Perks of Taking the High Road - The Atlantic - 0 views

  • What is the point of arguing with someone who disagrees with you? Presumably, you would like them to change their mind. But that’s easier said than done.
  • Research shows that changing minds, especially changing beliefs that are tied strongly to people’s identity, is extremely difficult
  • this personal attachment to beliefs encourages “competitive personal contests rather than collaborative searches for the truth.”
  • The way that people tend to argue today, particularly online, makes things worse.
  • You wouldn’t blame anyone involved for feeling as if they’re under fire, and no one is likely to change their mind when they’re being attacked.
  • odds are that neither camp is having any effect on the other; on the contrary, the attacks make opponents dig in deeper.
  • If you want a chance at changing minds, you need a new strategy: Stop using your values as a weapon, and start offering them as a gift.
  • Philosophers and social scientists have long pondered the question of why people hold different beliefs and values.
  • One of the most compelling explanations comes from Moral Foundations Theory, which has been popularized by Jonathan Haidt, a social psychologist at NYU. This theory proposes that humans share a common set of “intuitive ethics,” on top of which we build different narratives and institutions—and therefore beliefs—that vary by culture, community, and even person.
  • Extensive survey-based research has revealed that almost everyone shares at least two common values: Harming others without cause is bad, and fairness is good. Other moral values are less widely shared
  • political conservatives tend to value loyalty to a group, respect for authority, and purity—typically in a bodily sense, in terms of sexuality—more than liberals do.
  • Sometimes conflict arises because one group holds a moral foundation that the other simply doesn’t feel strongly about
  • even when two groups agree on a moral foundation, they can radically disagree on how it should be expressed
  • When people fail to live up to your moral values (or your expression of them), it is easy to conclude that they are immoral people.
  • Further, if you are deeply attached to your values, this difference can feel like a threat to your identity, leading you to lash out, which won’t convince anyone who disagrees with you.
  • research shows that if you insult someone in a disagreement, the odds are that they will harden their position against yours, a phenomenon called the boomerang effect.
  • so it is with our values. If we want any chance at persuasion, we must offer them happily. A weapon is an ugly thing, designed to frighten and coerce
  • effective missionaries present their beliefs as a gift. And sharing a gift is a joyful act, even if not everyone wants it.
  • The solution to this problem requires a change in the way we see and present our own values.
  • A gift is something we believe to be good for the recipient, who, we hope, may accept it voluntarily, and do so with gratitude. That requires that we present it with love, not insults and hatred.
  • 1. Don’t “other” others.
  • Go out of your way to welcome those who disagree with you as valued voices, worthy of respect and attention. There is no “them,” only “us.”
  • 2. Don’t take rejection personally.
  • just as you are not your car or your house, you are not your beliefs. Unless someone says, “I hate you because of your views,” a repudiation is personal only if you make it so
  • 3. Listen more.
  • When it comes to changing someone’s mind, listening is more powerful than talking. Researchers conducted experiments that compared polarizing arguments with a nonjudgmental exchange of views accompanied by deep listening. The former had no effect on viewpoints, whereas the latter reliably lowered exclusionary opinions.
  • when possible, listening and asking sensitive questions almost always has a more beneficial effect than talking.
  • Showing others that you can be generous with them regardless of their values can help weaken their belief attachment, and thus make them more likely to consider your point of view
  • for your values to truly be a gift, you must weaken your own belief attachment first
  • we should all promise to ourselves, “I will cultivate openness, non-discrimination, and non-attachment to views in order to transform violence, fanaticism, and dogmatism in myself and in the world.”
  • if I truly have the good of the world at heart, then I must not fall prey to the conceit of perfect knowledge, and must be willing to entertain new and better ways to serve my ultimate goal: creating a happier world
  • generosity and openness have a bigger chance of making the world better in the long run.
Javier E

How to Find Joy in Your Sisyphean Existence - The Atlantic - 0 views

  • The gods took their revenge by condemning Sisyphus to eternal torment in the underworld: He had to roll a huge boulder up a hill. When he reached the top, the stone would roll back down to the bottom, and he would have to start all over, on and on, forever.
  • One could even argue that all of life is Sisyphean: We eat to just get hungry again, and shower just to get dirty again, day after day, until the end.
  • Absurd, isn’t it? Albert Camus, the philosopher and father of a whole school of thought called absurdism, thought so. In his 1942 book The Myth of Sisyphus, Camus singles out Sisyphus as an icon of the absurd, noting that “his scorn of the gods, his hatred of death, and his passion for life won him that unspeakable penalty in which the whole being is exerted toward accomplishing nothing.”
  • It would be easy to conclude that an absurdist view of life rules out happiness and leads anyone with any sense to despair at her very existence. And yet in his book, Camus concludes, “One must imagine Sisyphus happy
  • this unexpected twist in Camus’ philosophy of life and happiness can help you change your perspective and see your daily struggles in a new, more equanimous way.
  • he argues that despite the hardships of this world, against all apparent odds, human beings regularly experience true happiness. People in terrible circumstances bask in love for one another. They enjoy simple diversions
  • Even Sisyphus was happy, according to Camus, because “the struggle itself toward the heights is enough to fill a man’s heart.” Simply put, he had something to keep him busy.
  • Instead of feeling desperation at the futility of life, Camus tells us to embrace its ridiculousness. It’s the only way to arrive at happiness, the most absurd emotion of all under these circumstances
  • We shouldn’t try to find some cosmic meaning in our relentless routines—getting, spending, eating, working, pushing our own little boulders up our own little hills
  • Instead, we should laugh uproariously at the fact that there is no meaning, and be happy anyway.
  • Happiness, for Camus, is an existential declaration of independence. Instead of advising “Don’t worry, be happy,” he offers a rebellious “Tell the universe to go suck eggs, be happy.”
  • If embracing the ridiculous seems impossible to you, Camus says it’s only because of your pride.
  • “Those who prefer their principles over their happiness, they refuse to be happy outside the conditions they seem to have attached to their happiness,”
  • In fact, each of us can consciously implement Camus’ absurdism in order to forge a happier life. Here are three practical ways to find joy in the ridiculous.
  • 1. Stand up to your ennui.
  • You can’t necessarily change your perception of the world, but, as I have written, you most certainly can change your response to that perception. Meet that feeling of despair with a personal motto, such as “I don’t know what everything means, but I do know I am alive right now, and I will not squander this moment
  • 2. Look for opportunities to do a little good.
  • One of the best ways to cultivate futility is by focusing on the big things you can’t control—war, natural disasters, hatred—as opposed to the little things you can.
  • Those little things include bringing a small blessing or source of relief to others.
  • if your commute to work is a soul-sucking existential nightmare, don’t ruminate on the cars stopped ahead of you. Rather, focus on making space for that poor sap stuck in the wrong lane who’s desperately trying to merge
  • 3. Be fully present.
  • Absurdity tends to sting only when we see it from the “outside”; for example, when you think about how meaningless it has been to wash the dishes every day in the past only to find them dirty again right now—and imagine the countless dish washings that the rest of your life will comprise.
  • Confronting the absurd is much more comfortable when you do so with mindfulness.
  • “While washing the dishes one should only be washing the dishes, which means that while washing the dishes one should be completely aware of the fact that one is washing the dishes.
  • When the broad sweep of life brings you horror, concentrate on this moment, and savor it. The pleasure and meaning you can find right now are real; the meaninglessness of the future is not.
  • Some mornings, I wake up seeing only boulders and can’t face pushing them once again up that hill
  • Those are the days when my old friend Camus comes in handy. Instead of despairing of the absurdity of life, I lean into it, laugh at it, and start my day in a light mood. Then I gather my beloved boulders and set out for the nearest hill.
Javier E

Elon Musk Doesn't Want Transparency on Twitter - The Atlantic - 0 views

  • The Twitter Files do what technology critics have long done: point out a mostly intractable problem that is at the heart of our societal decision to outsource broad swaths of our political discourse and news consumption to corporate platforms whose infrastructure and design were made for viral advertising.
  • The trolling is paramount. When former Facebook CSO and Stanford Internet Observatory leader Alex Stamos asked whether Musk would consider implementing his detailed plan for “a trustworthy, neutral platform for political conversations around the world,” Musk responded, “You operate a propaganda platform.” Musk doesn’t appear to want to substantively engage on policy issues: He wants to be aggrieved.
  • it’s possible that a shred of good could come from this ordeal. Musk says Twitter is working on a feature that will allow users to see if they’ve been de-amplified, and appeal. If it comes to pass, perhaps such an initiative could give users a better understanding of their place in the moderation process. Great!
Javier E

In 2022, TV Woke Up From the American Dream - The New York Times - 0 views

  • In politics, “the American dream” has long been used aspirationally, to evoke family and home. But as my colleague Jazmine Ulloa detailed earlier this year, the phrase has also lately been used ominously, especially by conservative politicians, to describe a certain way of life in danger of being stolen by outsiders.
  • The typical counterargument, both in politics and pop culture, has been that immigrants pursuing their ambitions help to strengthen all of America
  • recent stories have complicated this idea by questioning whether the dream itself — or, at least, defining that dream in mostly material terms — can be toxic.
  • This is the danger of the American dream when you scale it down from the national to the individual level. You risk devoting your life to wanting something because it’s what you’ve been told you should want. Everybody loves a Cinderella story, but sometimes your dream, in reality, is just a wish somebody else’s heart made.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than the real thing? | Counselling and therapy | The Guardian - 0 views

  • One night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”