
Home/ TOK Friends/ Group items tagged digital media


Javier E

Opinion | Privacy Is Too Big to Understand - The New York Times - 1 views

  • There is “no single rhetorical approach likely to work on a given audience and none too dangerous to try. Any story that sticks is a good one,”
  • This newsletter is about finding ways to make this stuff stick in your mind and to arm you with the information you need to take control of your digital life.
  • how to start? The definition of privacy itself. I think it’s time to radically expand it.
  • ...12 more annotations...
  • “Privacy” is an impoverished word — far too small a word to describe what we talk about when we talk about the mining, transmission, storing, buying, selling, use and misuse of our personal information.
  • “hyperobjects,” a concept so all-encompassing that it is impossible to adequately describe
  • invite skepticism because their scale is so vast and sometimes abstract.
  • When technology governs so many aspects of our lives — and when that technology is powered by the exploitation of our data — privacy isn’t just about knowing your secrets, it’s about autonomy
  • “Privacy is really about being able to define for ourselves who we are for the world and on our own terms,”
  • not a choice that belongs to an algorithm or data broker (entities that collect, aggregate and sell individuals’ personal data, derivatives and inferences from disparate public and private sources), and definitely not to Facebook.”
  • privacy is about how that data is used to take away our control
  • real-time data, once assumed to be protected by phone companies, was available for sale to bounty hunters for a $300 fee
  • ICE officials partnered with a private data firm to track license plate data.
  • It means reckoning with private surveillance databases armed with dossiers on regular citizens and outsourced to the highest bidder
  • “Years ago we worried about the N.S.A. building huge server farms, but now it’s much cheaper to go to a private-service vendor and outsource this to a company who can cloak their activity in trade secrets,
  • “It’s comparable to asking people to stop using air conditioning because of the ozone layer. It’s not likely to happen because the immediate comfort is more valuable than the long-term fear.
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment.Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
knudsenlu

Facebook, el monstruo de las dos cabezas - Archivo Digital de Noticias de Colombia y el... - 0 views

  • That is how, almost without realizing it, we begin to behave like products, always seeking more buyers, in service of whichever identity “sells” best.
  • Although it has the potential to become Big Brother, the difference from George Orwell’s version is that there the cameras and microphones were installed against the inhabitants’ will, whereas
  • “I tweet, therefore I am” could be the slogan of modern man, whose worth is often measured by the number of virtual friends he manages to accumulate, and by how many “Likes” his comments, images and posts earn on the network.
  • ...1 more annotation...
  • In programming code, our daily lives are recorded without our even noticing.
  • Sorry- I know it's in Spanish.. it just seemed fitting for TOK
Javier E

Don't Be Surprised About Facebook and Teen Girls. That's What Facebook Is. | Talking Po... - 0 views

  • First, set aside all morality. Let’s say we have a 16-year-old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course not.
  • If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets
  • We both know what you’d do if you were operating within the goals and structure of the experiment.
  • ...17 more annotations...
  • This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient.
  • the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.
  • Put in those terms, you barely have a chance.
  • Of course, Facebook can come in and say, this is damaging so we’re going to add some code that says don’t show this dieting/fat-shaming content to girls 18 and under. But the algorithms will find other vulnerabilities
  • So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?
  • why exactly are you creating a separate group of subroutines that yanks Facebook back when it does what it’s supposed to do particularly well? This, indeed, was how the internal dialog at Facebook developed, as described in the article I read. Basically, other executives said: Our business is engagement, why are we suggesting people log off for a while when they get particularly engaged?
  • what it makes me think about more is the conversations at Tobacco companies 40 or 50 years ago. At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.
  • The algorithms can be taught to find and address an infinite number of behaviors. But really you’re asking the researchers and programmers to create an alternative set of instructions where Instagram (or Facebook, same difference) jumps in and does exactly the opposite of its core mission, which is to drive engagement
  • You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.
  • there is definitely an analogy inasmuch as what you’re talking about here aren’t some glitches in the Facebook system. These aren’t some weird unintended consequences that can be ironed out of the product. It’s also in most cases not bad actors within Facebook. It’s what the product is. The product is getting attention and engagement against which advertising is sold
  • How good is the machine learning? Well, trial and error with between 3 and 4 billion humans makes you pretty damn good. That’s the product. It is inherently destructive, though of course the bad outcomes aren’t distributed evenly throughout the human population.
  • The business model is to refine this engagement engine, getting more attention and engagement and selling ads against the engagement. Facebook gets that revenue and the digital roadkill created by the product gets absorbed by the society at large
  • Facebook is like a spectacularly profitable nuclear energy company which is so profitable because it doesn’t build any of the big safety domes and dumps all the radioactive waste into the local river.
  • in the various articles describing internal conversations at Facebook, the shrewder executives and researchers seem to get this. For the company if not every individual they seem to be following the tobacco companies’ lead.
  • Ed. Note: TPM Reader AS wrote in to say I was conflating Facebook and Instagram and sometimes referring to one or the other in a confusing way. This is a fair point.
  • I spoke of them as the same intentionally. In part I’m talking about Facebook’s corporate ownership. Both sites are owned and run by the same parent corporation and as we saw during yesterday’s outage they are deeply hardwired into each other.
  • the main reason I spoke of them in one breath is that they are fundamentally the same. AS points out that the issues with Instagram are distinct because Facebook has a much older demographic and Instagram is a predominantly visual medium. (Indeed, that’s why Facebook corporate is under such pressure to use Instagram to drive teen and young adult engagement.) But they are fundamentally the same: AI and machine learning to drive engagement. Same same. Just different permutations of the same dynamic.
Javier E

Understanding the Social Networks | Talking Points Memo - 0 views

  • Even when people understand in some sense – and often even in detail – how the algorithms work they still tend to see these platforms as modern, digital versions of the town square. There have always been people saying nonsensical things, lying, unknowingly peddling inaccurate information. And our whole civic order is based on a deep skepticism about any authority’s ability to determine what’s true or accurate and what’s not. So really there’s nothing new under the sun, many people say.
  • But all of these points become moot when the networks – the virtual public square – are actually run by a series of computer programs designed to maximize ‘engagement’ and strong emotion for the purposes of selling advertising.
  • But really all these networks are running experiments that put us collectively into the role of Pavlov’s dogs.
  • ...6 more annotations...
  • The algorithms are showing you things to see what you react to and showing you more of the things that prompt an emotional response, that make it harder to leave Facebook or Instagram or any of the other social networks.
  • really if your goal is to maximize engagement that is of course what you’d do since anger is a far more compelling and powerful emotion than appreciation.
  • Facebook didn’t do that. That’s coded into our neurology. Facebook really is an extremism generating machine. It’s really an inevitable part of the core engine.
  • it’s not just Facebook. Or perhaps you could say it’s not even Facebook at all. It’s the mix of machine learning and the business models of all the social networks
  • They have real upsides. They connect us with people. Show us fun videos. But they are also inherently destructive. And somehow we have to take cognizance of that – and not just as a matter of the business decisions of one company.
  • the social networks – meaning the mix of machine learning and advertising/engagement based business models – are really something new under the sun. They’re addiction and extremism generating systems. It’s what they’re designed to do.
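The feedback loop these annotations describe — serve an item, measure the reaction, then weight future selections toward whatever provoked the strongest response — can be sketched as a toy reinforcement loop. This is purely illustrative: the topics, numbers, and update rule are invented for the sketch, not anything the platforms have disclosed.

```python
import random

def update_weights(weights, topic, engagement, lr=0.1):
    """Nudge a topic's weight toward the engagement it just produced."""
    weights[topic] = (1 - lr) * weights[topic] + lr * engagement
    return weights

def pick_topic(weights, explore=0.1):
    """Mostly exploit the highest-weighted topic; occasionally explore."""
    if random.random() < explore:
        return random.choice(list(weights))
    return max(weights, key=weights.get)

# Toy user who reacts most strongly to outrage-bait: the ranker
# quickly learns to serve mostly that, with no one coding it in.
user_response = {"outrage": 0.9, "cats": 0.4, "news": 0.2}
weights = {t: 0.5 for t in user_response}
for _ in range(200):
    topic = pick_topic(weights)
    weights = update_weights(weights, topic, user_response[topic])

assert max(weights, key=weights.get) == "outrage"
```

The point of the sketch is the one the annotations make: the drift toward anger-provoking content is not a policy anyone wrote down, it is the equilibrium of an engagement-maximizing update rule meeting human neurology.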
Javier E

Pandemic-Era Politics Are Ruining Public Education - The Atlantic - 0 views

  • You’re also the nonvoting, perhaps unwitting, subject of adults’ latest pedagogical experiments: either relentless test prep or test abolition; quasi-religious instruction in identity-based virtue and sin; a flood of state laws to keep various books out of your hands and ideas out of your head.
  • Your parents, looking over your shoulder at your education and not liking what they see, have started showing up at school-board meetings in a mortifying state of rage. If you live in Virginia, your governor has set up a hotline where they can rat out your teachers to the government. If you live in Florida, your governor wants your parents to sue your school if it ever makes you feel “discomfort” about who you are
  • Adults keep telling you the pandemic will never end, your education is being destroyed by ideologues, digital technology is poisoning your soul, democracy is collapsing, and the planet is dying—but they’re counting on you to fix everything when you grow up.
  • ...37 more annotations...
  • It isn’t clear how the American public-school system will survive the COVID years. Teachers, whose relative pay and status have been in decline for decades, are fleeing the field. In 2021, buckling under the stresses of the pandemic, nearly 1 million people quit jobs in public education, a 40 percent increase over the previous year.
  • These kids, and the investments that come with them, may never return—the beginning of a cycle of attrition that could continue long after the pandemic ends and leave public schools even more underfunded and dilapidated than before. “It’s an open question whether the public-school system will recover,” Steiner said. “That is a real concern for democratic education.”
  • The high-profile failings of public schools during the pandemic have become a political problem for Democrats, because of their association with unions, prolonged closures, and the pedagogy of social justice, which can become a form of indoctrination.
  • The party that stands for strong government services in the name of egalitarian principles supported the closing of schools far longer than either the science or the welfare of children justified, and it has been woefully slow to acknowledge how much this damaged the life chances of some of America’s most disadvantaged students.
  • Public education is too important to be left to politicians and ideologues. Public schools still serve about 90 percent of children across red and blue America.
  • Since the common-school movement in the early 19th century, the public school has had an exalted purpose in this country. It’s our core civic institution—not just because, ideally, it brings children of all backgrounds together in a classroom, but because it prepares them for the demands and privileges of democratic citizenship. Or at least, it needs to.
  • What is school for? This is the kind of foundational question that arises when a crisis shakes the public’s faith in an essential institution. “The original thinkers about public education were concerned almost to a point of paranoia about creating self-governing citizens,”
  • “Horace Mann went to his grave having never once uttered the phrase college- and career-ready. We’ve become more accustomed to thinking about the private ends of education. We’ve completely lost the habit of thinking about education as citizen-making.”
  • School can’t just be an economic sorting system. One reason we have a stake in the education of other people’s children is that they will grow up to be citizens.
  • Public education is meant not to mirror the unexamined values of a particular family or community, but to expose children to ways that other people, some of them long dead, think.
  • If the answer were simply to push more and more kids into college, the United States would be entering its democratic prime
  • So the question isn’t just how much education, but what kind. Is it quaint, or utopian, to talk about teaching our children to be capable of governing themselves?
  • The COVID era, with Donald Trump out of office but still in power and with battles over mask mandates and critical race theory convulsing Twitter and school-board meetings, shows how badly Americans are able to think about our collective problems—let alone read, listen, empathize, debate, reconsider, and persuade in the search for solutions.
  • democratic citizenship can, at least in part, be learned.
  • The history warriors build their metaphysics of national good or evil on a foundation of ignorance. In a 2019 survey, only 40 percent of Americans were able to pass the test that all applicants for U.S. citizenship must take, which asks questions like “Who did the United States fight in World War II?” and “We elect a President for how many years?” The only state in which a majority passed was Vermont.
  • The orthodoxies currently fighting for our children’s souls turn the teaching of U.S. history into a static and morally simple quest for some American essence. They proceed from celebration or indictment toward a final judgment—innocent or guilty—and bury either oppression or progress in a subordinate clause. The most depressing thing about this gloomy pedagogy of ideologies in service to fragile psyches is how much knowledge it takes away from students who already have so little
  • A central goal for history, social-studies, and civics instruction should be to give students something more solid than spoon-fed maxims—to help them engage with the past on its own terms, not use it as a weapon in the latest front of the culture wars.
  • Releasing them to do “research” in the vast ocean of the internet without maps and compasses, as often happens, guarantees that they will drown before they arrive anywhere.
  • The truth requires a grounding in historical facts, but facts are quickly forgotten without meaning and context
  • The goal isn’t just to teach students the origins of the Civil War, but to give them the ability to read closely, think critically, evaluate sources, corroborate accounts, and back up their claims with evidence from original documents.
  • This kind of instruction, which requires teachers to distinguish between exposure and indoctrination, isn’t easy; it asks them to be more sophisticated professionals than their shabby conditions and pay (median salary: $62,000, less than accountants and transit police) suggest we are willing to support.
  • To do that, we’ll need to help kids restore at least part of their crushed attention spans.
  • staring at a screen for hours is a heavy depressant, especially for teenagers.
  • we’ll look back on the amount of time we let our children spend online with the same horror that we now feel about earlier generations of adults who hooked their kids on smoking.
  • “It’s not a choice between tech or no tech,” Bill Tally, a researcher with the Education Development Center, told me. “The question is what tech infrastructure best enables the things we care about,” such as deep engagement with instructional materials, teachers, and other students.
  • The pandemic should have forced us to reassess what really matters in public school; instead, it’s a crisis that we’ve just about wasted.
  • Like learning to read as historians, learning to sift through the tidal flood of memes for useful, reliable information can emancipate children who have been heedlessly hooked on screens by the adults in their lives
  • Finally, let’s give children a chance to read books—good books. It’s a strange feature of all the recent pedagogical innovations that they’ve resulted in the gradual disappearance of literature from many classrooms.
  • The best way to interest young people in literature is to have them read good literature, and not just books that focus with grim piety on the contemporary social and psychological problems of teenagers.
  • We sell them insultingly short in thinking that they won’t read unless the subject is themselves. Mirrors are ultimately isolating; young readers also need windows, even if the view is unfamiliar, even if it’s disturbing
  • connection through language to universal human experience and thought is the reward of great literature, a source of empathy and wisdom.
  • The culture wars, with their atmosphere of resentment, fear, and petty faultfinding, are hostile to the writing and reading of literature.
  • W. E. B. Du Bois wrote: “Nations reel and stagger on their way; they make hideous mistakes; they commit frightful wrongs; they do great and beautiful things. And shall we not best guide humanity by telling the truth about all this, so far as the truth is ascertainable?”
  • The classroom has become a half-abandoned battlefield, where grown-ups who claim to be protecting students from the virus, from books, from ideologies and counter-ideologies end up using children to protect themselves and their own entrenched camps.
  • American democracy can’t afford another generation of adults who don’t know how to talk and listen and think. We owe our COVID-scarred children the means to free themselves from the failures of the past and the present.
  • Students are leaving as well. Since 2020, nearly 1.5 million children have been removed from public schools to attend private or charter schools or be homeschooled.
  • “COVID has encouraged poor parents to question the quality of public education. We are seeing diminished numbers of children in our public schools, particularly our urban public schools.” In New York, more than 80,000 children have disappeared from city schools; in Los Angeles, more than 26,000; in Chicago, more than 24,000.
peterconnelly

They Did Their Own 'Research.' Now What? - The New York Times - 0 views

  • the crash of two linked cryptocurrencies caused tens of billions of dollars in value to evaporate from digital wallets around the world.
  • People who thought they knew what they were getting into had, in the space of 24 hours, lost nearly everything. Messages of desperation flooded a Reddit forum for traders of one of the currencies, a coin called Luna, prompting moderators to share phone numbers for international crisis hotlines.
  • “DYOR” is shorthand for “do your own research,”
  • ...8 more annotations...
  • a reminder to stay informed and vigilant against groupthink.
  • A common refrain in battles about Covid-19 and vaccination, politics and conspiracy theories, parenting, drugs, food, stock trading and media, it signals not just a rejection of authority but often trust in another kind.
  • “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.
  • “There’s this idea that the goal of science is consensus,” Professor Carrion said. “The model they brought to it was that we didn’t need consensus.” She noted that the women she surveyed often used singular rather than plural pronouns. “It was ‘she needs to do her own research,’” Professor Carrion said, rather than we need to do ours. Unlike some critical health movements in the past, this was an individualist endeavor.
  • One of the enticing aspects of cryptocurrencies, which pose an alternative to traditional financial institutions, is that expertise is available to anyone who wants to claim it.
  • In crypto, the uses of DYOR are various and contradictory, earnest and ironic sometimes within the same discussion. Breathless investment pitches for new coins are punctuated with “NFA/DYOR” (not financial advice), or admonitions not to invest more than you can afford to lose, which many people are obviously ignoring; stories about getting rich are prefaced with DYOR; requests for advice about which coins to hold are answered with DYOR. It is the siren song of crypto investing.
  • In that way — the momentum of a group — crypto investing isn’t altogether distinct from how people have invested in the stock market for decades. Though here it is tinged with a rebellious, anti-authoritarian streak: We’re outsiders, in this together; we’re doing something sort of ridiculous, but also sort of cool.
  • “Now it seems like DYOR can only do so much,” the user wrote. Eventually, the user said, you end up relying on “trust.”
Javier E

Reality Is Broken. We Have AI Photos to Blame. - WSJ - 0 views

  • AI headshots aren’t yet perfect, but they’re so close I expect we’ll start seeing them on LinkedIn, Tinder and other social profiles. Heck, we may already see them. How would we know?
  • Welcome to our new reality, where nothing is real. We now have photos initially captured with cameras that AI changes into something that never was
  • Or, like the headshot above, there are convincingly photographic images AI generates out of thin air.
  • ...11 more annotations...
  • Adobe, maker of Photoshop, released a new tool in Firefly, its generative-AI image suite, that lets you change and add in parts of a photo with AI imagery. Earlier this month, Google showed off a new Magic Editor, initially for Pixel phones, that allows you to easily manipulate a scene. And people are all over TikTok posting the results of AI headshot services like Try It On.
  • After testing a mix of AI editing and generating tools, I just have one question for all of you armchair philosophers: What even is a photo anymore?
  • I have always wondered what I’d look like as a naval officer. Now I don’t have to. I snapped a selfie and uploaded it to Adobe Firefly’s generative-fill tool. One click of the Background button and my cluttered office was wiped out. I typed “American flag” and in it went. Then I selected the Add tool, erased my torso and typed in “naval uniform.” Boom! Adobe even found me worthy of numerous awards and decorations.
  • Astronaut, fighter pilot, pediatrician. I turned myself into all of them in under a minute each. The AI-generated images did have noticeable issues: The uniforms were strange and had odd lettering, the stethoscope seemed to be cut in half and the backgrounds were warped and blurry. Yet the final images are fun, and the quality will only get better. 
  • In FaceApp, for iOS and Android, I was able to change my frown to a smile—with the right amount of teeth! I was also able to add glasses and change my hair color. Some said it looked completely real, others who know me well figured something was up. “Your teeth look too perfect.”
  • The real reality-bending happens in Midjourney, which can turn text prompts into hyper-realistic images and blend existing images in new ways. The quality of its generated images exceeds that of OpenAI’s Dall-E and Adobe’s Firefly.
  • it’s more complicated to use, since it runs through the chat app Discord. Sign up for the service, access the Midjourney bot through your Discord account (via web or app), then start typing in prompts. My video producer Kenny Wassus started working with a more advanced Midjourney plugin called Insight Face Swap-Bot, which allows you to sub a face into a scene you’ve already made. He’s become a master—making me a Game of Thrones warrior and a Star Wars rebel, among other things.
  • We’re headed for a time when we won’t be able to tell how manipulated a photo is, what parts are real or fake.
  • When influential messages are conveyed through images—be they news or misinformation—people have reason to know a photo’s origin and what’s been done to it.
  • Firefly adds a “content credential,” digital information baked into the file, that says the image was manipulated with AI. Adobe is pushing to get news, tech and social-media platforms to use this open-source standard so we can all understand where the images we see came from.
  • So, yeah, our ability to spot true photos might depend on the cooperation of the entire internet. And by “true photo,” I mean one that captures a real moment—where you’re wearing your own boring clothes and your hair is just so-so, but you have the exact right number of teeth in your head.
Javier E

Why Didn't the Government Stop the Crypto Scam? - 0 views

  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • Institutional change, in other words, takes time.
  • ...22 more annotations...
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of the Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova to become the powerful bank regulator at the Office of the Comptroller of the Currency
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some of the most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leaders at the SEC has gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it
  • Under Trump crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving). The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming or buying drugs on the internet managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as ‘the future.’
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • ...39 more annotations...
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • “Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • Bad experiences seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
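The gap the article describes between Meta's "prevalence" metric and the BEEF survey comes down to two different denominators: prevalence counts rule-violating content *views*, while BEEF asks *people* whether they had a bad experience in the past week. A minimal sketch makes the arithmetic concrete — all figures below are hypothetical except the 0.05% measurement floor and the 238,000-respondent survey size cited in the article:

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Meta's preferred metric, as described: the fraction of content
    views worldwide that explicitly violate a platform rule."""
    return violating_views / total_views

# Below this rate, the article says prevalence "couldn't be reliably estimated."
MEASUREMENT_FLOOR = 0.0005  # 0.05%

# Hypothetical view counts under narrowly written rules.
total_views = 10_000_000
violating_views = 3_000
bullying_prevalence = prevalence(violating_views, total_views)  # 0.0003, under the floor

# BEEF-style survey: ask users directly about the past seven days.
survey_respondents = 238_000       # survey size cited in the article
reported_bad_experience = 71_400   # hypothetical: users who recall a bad experience
survey_rate = reported_bad_experience / survey_respondents  # 0.30

# One person sees many posts, and a single hostile comment can sour the week,
# so a tiny per-view rate can coexist with a large per-person rate.
gap = survey_rate / bullying_prevalence  # ~1000x in this toy example
```

The toy numbers are arbitrary, but the structure mirrors Bejar's argument: because the denominators differ (views versus people) and the rules are narrow, a platform can report prevalence near zero while a quarter of surveyed teens report bad experiences.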
criscimagnael

9 Subtle Ways Technology Is Making Humanity Worse - 0 views

  • This poor posture can lead not only to back and neck issues but psychological ones as well, including lower self-esteem and mood, decreased assertiveness and productivity, and an increased tendency to recall negative things
  • Intense device usage can exhaust your eyes and cause eye strain, according to the Mayo Clinic, and can lead to symptoms such as headaches, difficulty concentrating, and watery, dry, itchy, burning, sore, or tired eyes. Overuse can also cause blurred or double vision and increased sensitivity to light.
  • Using your devices too much before bedtime can lead to insomnia.
  • ...7 more annotations...
  • Using tech devices is addictive, and it's becoming more and more difficult to disengage from them. In fact, the average US adult spends more than 11 hours daily in the digital world
  • These days, we have a world of information at our fingertips via the internet. While this is useful, it does have some drawbacks. Entrepreneur Beth Haggerty said she finds that it "limits pure creative thought, at times, because we are developing habits to Google everything to quickly find an answer."
  • Technology can have a negative impact on relationships, particularly when it affects how we communicate.One of the primary issues is that misunderstandings are much more likely to occur when communicating via text or email
  • Another social skill that technology is helping to erode is young people's ability to read body language and nuance in face-to-face encounters.
  • Young adults who use seven to 11 social media platforms had more than three times the risk of depression and anxiety than those who use two or fewer platforms.
  • Can you imagine doing your job without the help of technology of any kind? What about communicating? Or traveling? Or entertaining yourself?
  • Smartphone slouch. Desk slump. Text neck. Whatever you call it, the way we hold ourselves when we use devices like phones, computers, and tablets isn't healthy.
Javier E

Apple News Plus Review: Good Value, But Apple Needs to Fine Tune This | Tom's Guide - 0 views

  • For $9.99 a month, News+ gives you access to more than 300 magazines, along with news articles from The Wall Street Journal and The Los Angeles Times.
  • if you want to find a specific magazine within the News+ tab, be prepared to give that scrolling finger a workout. There's no search field in the News+ tab for typing in a magazine title, so you've got to tap on Apple's catalog and scroll until you find what you're looking for
  • You can browse by category from the home screen, which reduces the number of covers you have to sort through a little bit.
  • ...14 more annotations...
  • Below the browsing menu and list of categories, you'll find the My Magazines section, which contains the publications you're currently looking at, plus issues you've downloaded.
  • (The desktop version of News+ handles things better — there's a persistent search bar in the upper left corner of the app.)
  • To find a specific title in News+ (without scrolling anyhow), head over to the Following tab directly to the right of the News+ in the News app. On that screen, there's a search field, and you can type in publication titles to bring up content from both News+ and the free News section
  • At present, it appears the only way to make a magazine stay in My Magazines is to download it from the cloud, something you do by tapping the cloud icon next to the cover. I couldn't find any way to designate a magazine as one of my favorites from within News+, so if I want to find a new issue or revisit an old one, I'm left with Apple's clunky search feature
  • Whatever magazine I started reading in News Plus — whether it was the latest Vanity Fair or the New Republic — would pop up in My Magazines under Reading Now.
  • The most frequently used section of News+ figures to be My Magazines, though to be truly useful, it's going to need a little fine tuning.
  • Speaking of back issues, when you're within a magazine in News+, just tap the magazine's title at the top of the screen. You'll see a list of previous issues for that title, and in some cases, you'll see current headlines and articles from that publication's website
  • Select a current issue of a magazine, and you'll get a title page with a tappable table of contents. In most cases, there's no description for the article, so you'll just have to hope that the headline you're tapping on gives you a good idea of what to expect
  • From within the article, a Next button lets you skip ahead to the next story in an issue, while an Open button returns you to the table of contents.
  • Be aware that some publications, such as New Republic, simply feature PDFs of their current issues instead of formats optimized for digital devices
  • The New Yorker splits the difference, with no table of contents and PDFs of ad pages from the print magazine interspersed between scrollable articles.
  • You have the option of signifying that you love or hate stories, which will help fine-tune News+'s recommendations, and you can add many articles to your Safari reading list
  • The lines between what's free and what's paid also seem a bit blurred, even with the separate News+ tab
  • how frequently is new content going to surface on News+? Will all back issues get the unappealing PDF treatment?
Javier E

Opinion | Even the Best Smart Watch Might Be Bad for Your Brain - The New York Times - 0 views

  • one major downside to all this quantification: It can interfere with our ability to know our own bodies. Once you outsource your well-being to a device and convert it into a number, it stops being yours.
  • With my smart watch, sometimes I would wake up in the morning and check my app to see how I slept — instead of just taking a moment to notice that I was still tired
  • It’s an extension of our hustle-oriented culture, said the executive coach and performance expert Brad Stulberg, author of “The Practice of Groundedness.” “Our culture promotes the limiting belief that measurable achievement is the predominant arbiter of success, and these devices play right into that,
  • ...10 more annotations...
  • The more I used my watch to monitor my stress, the higher my stress levels rose.
  • “It’s like you’re trying to win at this game instead of living your life. Instead of learning what your body feels like, you have a number.”
  • Add a social or competitive component, as in the fitness app Strava or the community features on Peloton, and the feelings of control and empowerment that fitness can foster can morph quickly into the opposite.
  • If it feels like an addiction, that’s because it can work similarly to smartphone and other digital addictions. Dependency is what these devices are designed to foster.
  • “in fact, we very much can become compulsively fixated on these wearable devices — in a way that is akin to addiction.”
  • These devices don’t just record your behavior — they influence it and keep you coming back. You become dependent on external validation.
  • you can’t quantify your way to good health. The reality is much harder.
  • I know I got fitter. But I started to feel that my health wasn’t grounded in my own body anymore, or even in my mind.
  • Exercise wasn’t helping me rebound from pressure anymore; it was adding to it.
  • Of course these watches can be useful: for health data, reminding you to move more or maybe even that emergency call if you wind up falling in the woods. Many of us make better choices when we know we’re being watched.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • The character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”