TOK Friends: Group items tagged 'algorithm'

Javier E

What Do We Lose If We Lose Twitter? - The Atlantic - 0 views

  • What do we lose if we lose Twitter?
  • At its best, Twitter can still provide that magic of discovering a niche expert or elevating a necessary, insurgent voice, but there is far more noise than signal. Plenty of those overenthusiastic voices, brilliant thinkers, and influential accounts have burned out on culture-warring, or have been harassed off the site or into lurking.
  • Twitter is, by some standards, a niche platform, far smaller than Facebook or Instagram or TikTok. The internet will evolve or mutate around a need for it. I am aware that all of us who can’t quit the site will simply move on when we have to.
  • ...15 more annotations...
  • Perhaps the best example of what Twitter offers now—and what we stand to gain or lose from its demise—is illustrated by the path charted by public-health officials, epidemiologists, doctors, and nurses over the past three years.
  • They offered guidance that a flailing government response was too slow to provide, and helped cobble together an epidemiological picture of infections and case counts. At a moment when people were terrified and looking for any information at all, Twitter seemed to offer a steady stream of knowledgeable, diligent experts.
  • But Twitter does another thing quite well, and that’s crushing users with the pressures of algorithmic rewards and all of the risks, exposure, and toxicity that come with virality
  • Imagining a world without it can feel impossible. What do our politics look like without the strange feedback loop of a Twitter-addled political press and a class of lawmakers that seems to govern more via shitposting than by legislation?
  • What happens if the media lose what the writer Max Read recently described as a “way of representing reality, and locating yourself within it”? The answer is probably messy.
  • Here’s the worry that, absent a distributed central nervous system like Twitter, “the collective worldview of the ‘media’ would instead be over-shaped, from the top down, by the experiences and biases of wealthy publishers, careerist editors, self-loathing journalists, and canny operators operating in relatively closed social and professional circles.”
  • many of the most hyperactive, influential twitterati (cringe) of the mid-2010s have built up large audiences and only broadcast now: They don’t read their mentions, and they rarely engage. In private conversations, some of those people have expressed a desire to see Musk torpedo the site and put a legion of posters out of their misery.
  • Many of the past decade’s most polarizing and influential figures—people such as Donald Trump and Musk himself, who captured attention, accumulated power, and fractured parts of our public consciousness—were also the ones who were thought to be “good” at using the website.
  • the effects of Twitter’s chief innovation—its character limit—on our understanding of language, nuance, and even truth.
  • “These days, it seems like we are having languages imposed on us,” he said. “The fact that you have a social media that tells you how many characters to use, this is language imposition. You have to wonder about the agenda there. Why does anyone want to restrict the full range of my language? What’s the game there?”
  • in McLuhanian fashion, the constraints and the architecture change not only what messages we receive but how we choose to respond. Often that choice is to behave like the platform itself: We are quicker to respond and more aggressive than we might be elsewhere, with a mindset toward engagement and visibility
  • it’s easy to argue that we stand to gain something essential and human if we lose Twitter. But there is plenty about Twitter that is also essential and human.
  • No other tool has connected me to the world—to random bits of news, knowledge, absurdist humor, activism, and expertise, and to scores of real personal interactions—like Twitter has
  • What makes evaluating a life beyond Twitter so hard is that everything that makes the service truly special is also what makes it interminable and toxic.
  • the worst experience you can have on the platform is to “win” and go viral. Generally, it seems that the more successful a person is at using Twitter, the more they refer to it as a hellsite.
Javier E

Don't Do TikTok - by Jonathan V. Last - The Triad - 0 views

  • The small-bore concern is personal data. TikTok is basically Chinese spyware. The platform is owned by a Chinese company, Bytedance, which, like all Chinese companies, operates at the pleasure of the Chinese Communist Party. Anyone from Bytedance who wants to look into an American user’s TikTok data can do so. And they do it on the reg.
  • But personal data isn’t the big danger. The big danger is that TikTok decides what videos people see. Recommendations are driven entirely by the company’s black-box algorithm. And since TikTok answers to the Chinese Communist Party, then if the ChiComs tell TikTok to start pushing certain videos to certain people, that’s what TikTok will do.
  • It’s a gigantic propaganda engine. Making TikTok your platform of choice is the equivalent of using RT as your primary news source.
  • ...7 more annotations...
  • TikTok accounts run by the propaganda arm of the Chinese government have accumulated millions of followers and tens of millions of views, many of them on videos editorializing about U.S. politics without clear disclosure that they were posted by a foreign government.
  • The accounts are managed by MediaLinks TV, a registered foreign agent and Washington D.C.-based outpost of the main Chinese Communist Party television news outlet, China Central Television. The largest of them are @Pandaorama, which features cute videos about Chinese culture, @The…Optimist, which posts about sustainability, and @NewsTokss, which features coverage of U.S. national and international news.
  • In the run-up to the 2022 elections, the @NewsTokss account criticized some candidates (mostly Republicans), and favored others (mostly Democrats). A video from July began with the caption “Cruz, Abbott Don’t Care About Us”; a video from October was captioned “Rubio Has Done Absolutely Nothing.” But @NewsTokss did not target only Republicans; another October video asked viewers whether they thought President Joe Biden’s promise to sign a bill codifying abortion rights was a “political manipulation tactic.” Nothing in these videos disclosed to viewers that they were being pushed by a foreign government.
  • any Chinese play for Taiwan would be accompanied by TikTok aggressively pushing content in America designed to divide public opinion and weaken America’s commitment to Taiwan’s defense.
  • With all the official GOP machinations against gay marriage, it seems like if McConnell wanted that bill to fail, he could have pressured two Republican senators to vote against it. He said nothing. Trump said nothing. DeSantis said nothing. There was barely a whimper of protest from those who could have influenced this. Mike Lee and Ted Cruz engaged in theatrics, but no one actually used their power to stop this.
  • They let it pass because they don’t care and they want it to go away as an issue. And that goes for the MAGA GOP as well. Opposition to it in politics is all theater and will have a shelf life in riling up the base.
  • Evangelical religious convictions might be for one man + one woman marriage. But, the civil/political situation is far different from that and it’s worth recognizing where the GOP actually stands. They could have stopped this. They didn’t. That point should be clear, especially to their evangelical base who looks to the GOP to save America for them.
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • ...15 more annotations...
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want.
  • The moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

Opinion | Here's Hoping Elon Musk Destroys Twitter - The New York Times - 0 views

  • I’ve sometimes described being on Twitter as like staying too late at a bad party full of people who hate you. I now think this was too generous to Twitter. I mean, even the worst parties end.
  • Twitter is more like an existentialist parable of a party, with disembodied souls trying and failing to be properly seen, forever. It’s not surprising that the platform’s most prolific users often refer to it as “this hellsite.”
  • Among other things, he’s promised to reinstate Donald Trump, whose account was suspended after the Jan. 6 attack on the Capitol. Other far-right figures may not be far behind, along with Russian propagandists, Covid deniers and the like. Given Twitter’s outsize influence on media and politics, this will probably make American public life even more fractious and deranged.
  • ...12 more annotations...
  • The best thing it could do for society would be to implode.
  • Twitter hooks people in much the same way slot machines do, with what experts call an “intermittent reinforcement schedule.” Most of the time, it’s repetitive and uninteresting, but occasionally, at random intervals, some compelling nugget will appear. Unpredictable rewards, as the behavioral psychologist B.F. Skinner found with his research on rats and pigeons, are particularly good at generating compulsive behavior.
  • “I don’t know that Twitter engineers ever sat around and said, ‘We are creating a Skinner box,’” said Natasha Dow Schüll, a cultural anthropologist at New York University and author of a book about gambling machine design. But that, she said, is essentially what they’ve built. It’s one reason people who should know better regularly self-destruct on the site — they can’t stay away.
  • Twitter is not, obviously, the only social media platform with addictive qualities. But with its constant promise of breaking news, it feeds the hunger of people who work in journalism and politics, giving it a disproportionate, and largely negative, impact on those fields, and hence on our national life.
  • Twitter is much better at stoking tribalism than promoting progress.
  • According to a 2021 study, content expressing “out-group animosity” — negative feelings toward disfavored groups — is a major driver of social-media engagement
  • That builds on earlier research showing that on Twitter, false information, especially about politics, spreads “significantly farther, faster, deeper and more broadly than the truth.”
  • The company’s internal research has shown that Twitter’s algorithm amplifies right-wing accounts and news sources over left-wing ones.
  • This dynamic will probably intensify quite a bit if Musk takes over. Musk has said that Twitter has “a strong left bias,” and that he wants to undo permanent bans, except for spam accounts and those that explicitly call for violence. That suggests figures like Alex Jones, Steve Bannon and Marjorie Taylor Greene will be welcomed back.
  • But as one of the people who texted Musk pointed out, returning banned right-wingers to Twitter will be a “delicate game.” After all, the reason Twitter introduced stricter moderation in the first place was that its toxicity was bad for business
  • For A-list entertainers, The Washington Post reports, Twitter “is viewed as a high-risk, low-reward platform.” Plenty of non-celebrities feel the same way; I can’t count the number of interesting people who were once active on the site but aren’t anymore.
  • An influx of Trumpists is not going to improve the vibe. Twitter can’t be saved. Maybe, if we’re lucky, it can be destroyed.
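The “intermittent reinforcement schedule” described in the annotations above can be made concrete with a toy simulation. This is a minimal sketch, not anything from the article or from Twitter itself: it contrasts a fixed reward schedule with a random one at the same average rate, to show how unpredictable the gaps between “compelling nuggets” become under intermittent reinforcement.

```python
import random

random.seed(0)

def simulate_checks(n_checks: int, reward_fn) -> list[int]:
    """Return the gap (number of feed checks) between successive rewards."""
    gaps, since_last = [], 0
    for i in range(n_checks):
        since_last += 1
        if reward_fn(i):
            gaps.append(since_last)
            since_last = 0
    return gaps

# Fixed schedule: every 20th check delivers a "compelling nugget".
fixed_gaps = simulate_checks(2000, lambda i: (i + 1) % 20 == 0)

# Intermittent (variable-ratio) schedule: each check pays off with 5% probability,
# so the average rate is the same but the timing is unpredictable.
random_gaps = simulate_checks(2000, lambda i: random.random() < 0.05)

print("fixed schedule  - gaps: min %d, max %d" % (min(fixed_gaps), max(fixed_gaps)))
print("random schedule - gaps: min %d, max %d" % (min(random_gaps), max(random_gaps)))
# The unpredictable gaps (sometimes 1, sometimes 60+) are what Skinner found
# most effective at sustaining compulsive behaviour.
```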
Javier E

What Is Mastodon and Why Are People Leaving Twitter for It? - The New York Times - 0 views

  • Mastodon is a part of the Fediverse, or federated universe, a group of federated platforms that share communication protocols.
  • Unlike Twitter, Mastodon presents posts in chronological order, rather than based on an algorithm.
  • It also has no ads; Mastodon is largely crowdfunded
  • ...7 more annotations...
  • Most servers are funded by the people who use them.
  • The servers that Mastodon oversees — Mastodon Social and Mastodon Online — are funded through Patreon, a membership and subscription service platform often used by content creators.
  • Although Mastodon visually resembles Twitter, its user experience is more akin to that of Discord, a talking and texting app where people also join servers that have their own cultures and rules.
  • Unlike Twitter and Discord, Mastodon does not have the ability to make its users, or the people who create servers, do anything.
  • But servers can dictate how they interact with one another — or whether they interact at all in a shared stream of posts. For example, when Gab used Mastodon’s code, Mastodon Social and other independent servers blocked Gab’s server, so posts from Gab did not appear on the feeds of people using those servers.
  • Like an email account, your username includes the name of the server itself. For example, a possible username on Mastodon Social would be janedoe@mastodon.social. Regardless of which server you sign up with, you can interact with people who use other Mastodon servers, or you can switch to another one
  • Once you sign up for an account, you can post “toots,” which are Mastodon’s version of tweets. You can also boost other people’s toots, the equivalent of a retweet.
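As a rough illustration of the points above (email-style handles, server-level blocking, and a chronological rather than algorithmic timeline), here is a small sketch. The handles, server names, and posts are invented, and the "federation" here is just dictionaries in memory; this is not Mastodon's real ActivityPub-based API, only the shape of the idea.

```python
from datetime import datetime, timezone

# Toy "federated" posts keyed by server; each post is (author handle, timestamp, text).
# All names and posts below are invented for illustration.
SERVERS = {
    "mastodon.social": [
        ("janedoe@mastodon.social", datetime(2022, 11, 8, 14, 5, tzinfo=timezone.utc), "First toot!"),
    ],
    "example.science": [
        ("astro@example.science", datetime(2022, 11, 8, 15, 20, tzinfo=timezone.utc), "New preprint is up."),
    ],
    "blocked.example": [
        ("troll@blocked.example", datetime(2022, 11, 8, 16, 0, tzinfo=timezone.utc), "Spam"),
    ],
}

def parse_handle(handle: str) -> tuple[str, str]:
    """Split 'user@server' into (user, server), like an email address."""
    user, server = handle.split("@", 1)
    return user, server

def home_timeline(followed: list[str], blocked_servers: set[str]):
    """Chronological (newest-first) timeline: no ranking algorithm, just a sort by time."""
    wanted_servers = {parse_handle(h)[1] for h in followed}
    posts = []
    for server, server_posts in SERVERS.items():
        if server in blocked_servers or server not in wanted_servers:
            continue  # defederated servers simply never appear in the feed
        posts.extend(server_posts)
    return sorted(posts, key=lambda p: p[1], reverse=True)

timeline = home_timeline(
    followed=["janedoe@mastodon.social", "astro@example.science", "troll@blocked.example"],
    blocked_servers={"blocked.example"},
)
for author, ts, text in timeline:
    print(ts.isoformat(), author, "-", text)
```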
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • The meme, @TetraspaceWest said, wasn’t necessarily implying that the A.I. was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
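The R.L.H.F. loop described in the annotations above (humans score chatbot responses, and the scores are fed back into the model) can be caricatured in a few lines. This is a deliberately tiny sketch under obvious simplifications: a real R.L.H.F. pipeline trains a separate reward model and fine-tunes the language model's weights, whereas here the "model" is just a weighted choice among three canned replies, and human ratings nudge those weights.

```python
import random

random.seed(1)

# A toy "policy": candidate replies to one prompt, each with a sampling weight.
# Real RLHF updates neural-network parameters; here we update these numbers directly.
weights = {
    "Sure, here is a careful, harmless answer.": 1.0,
    "I refuse to talk to you.": 1.0,
    "Unhinged, scary rant.": 1.0,
}

def sample_reply() -> str:
    replies, w = zip(*weights.items())
    return random.choices(replies, weights=w, k=1)[0]

def human_score(reply: str) -> float:
    """Stand-in for a human rater: polite and helpful scores high, scary scores low."""
    if "harmless" in reply:
        return 1.0
    if "refuse" in reply:
        return 0.2
    return -1.0

LEARNING_RATE = 0.5
for _ in range(200):                      # 200 rounds of "collect feedback, update"
    reply = sample_reply()
    score = human_score(reply)
    # Feed the score back: raise the weight of well-rated replies, lower badly rated ones.
    weights[reply] = max(0.01, weights[reply] + LEARNING_RATE * score)

for reply, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{w:6.2f}  {reply}")
# After feedback, the polite reply dominates sampling (the "smiley face"),
# while the mechanism that produced all three candidates is unchanged underneath.
```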
Javier E

Male Stock Analysts With 'Dominant' Faces Get More Information-and Have Better Forecast... - 0 views

  • “People form impressions after extremely brief exposure to faces—within a hundred milliseconds,” says Alexander Todorov, a behavioral-science professor at the University of Chicago Booth School of Business. “They take actions based on those impressions,”
  • Analyst accuracy was determined by comparing each analyst’s prediction error—the difference between their prediction and the actual earnings—with that of all analysts for that same company and quarter.
  • Prof. Teoh and her fellow researchers analyzed the facial traits of nearly 800 U.S. sell-side stock financial analysts working between January 1990 and December 2017 who also had a LinkedIn profile photo as of 2018. They pulled their sample of analysts from Thomson Reuters and the firms they covered from the merged Center for Research in Security Prices and Compustat, a database of financial, statistical and market information
  • ...8 more annotations...
  • The researchers used facial-recognition software to map out specific points on a person’s face, then applied machine-learning algorithms to the facial points to obtain empirical measures for three key face impressions—trustworthiness, dominance and attractiveness.  
  • They examined the association of these impressions with the accuracy of analysts’ quarterly forecasts, drawn from the Institutional Brokers Estimate System
  • Under most circumstances, such quick impressions aren’t accurate and shouldn’t be trusted, he says.
  • For an average stock valued at $100, Prof. Teoh says, analysts ranked as looking most trustworthy were 25 cents more accurate in earnings-per-share forecasts than the analysts who were ranked as looking least trustworthy
  • Similarly, most-dominant-looking analysts were 52 cents more accurate in their EPS forecast than least-dominant-looking analysts.
  • The relation between a dominant face and accuracy, meanwhile, was significant before and after the regulation was enacted, the analysts say. This suggests that dominant-looking male analysts are always able to obtain information,
  • While forecasts of female analysts regardless of facial characteristics were on average more accurate than those of their male counterparts, the forecasts of women who were seen as more-dominant-looking were significantly less accurate than their male counterparts.  
  • Says Prof. Todorov: “Women who look dominant are more likely to be viewed negatively because it goes against the cultural stereotype.”
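The accuracy benchmark described above (comparing each analyst's prediction error with that of all analysts covering the same company in the same quarter) is easy to express in code. A hedged sketch follows: the data are invented, and the exact peer benchmark (mean of peers rather than median or ranking) is an assumption here; the point is only the peer-relative error computation.

```python
import pandas as pd

# Invented example data: EPS forecasts for one company-quarter and the realized EPS.
forecasts = pd.DataFrame({
    "analyst":  ["A", "B", "C", "D"],
    "company":  ["XYZ", "XYZ", "XYZ", "XYZ"],
    "quarter":  ["2017Q4"] * 4,
    "forecast": [1.10, 1.25, 0.90, 1.02],
})
actuals = pd.DataFrame({
    "company": ["XYZ"],
    "quarter": ["2017Q4"],
    "actual_eps": [1.05],
})

df = forecasts.merge(actuals, on=["company", "quarter"])
df["abs_error"] = (df["forecast"] - df["actual_eps"]).abs()

# Peer-relative accuracy: each analyst's error minus the average error of all analysts
# covering the same company and quarter (negative = more accurate than peers).
df["peer_mean_error"] = df.groupby(["company", "quarter"])["abs_error"].transform("mean")
df["relative_error"] = df["abs_error"] - df["peer_mean_error"]

print(df[["analyst", "forecast", "abs_error", "relative_error"]])
# A facial-trait score (for example a "dominance" measure from landmark-based ML, as in
# the study) could then be related to relative_error across many company-quarters.
```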
Javier E

DeepMind uncovers structure of 200m proteins in scientific leap forward | DeepMind | Th... - 0 views

  • Proteins are the building blocks of life. Formed of chains of amino acids, folded up into complex shapes, their 3D structure largely determines their function. Once you know how a protein folds up, you can start to understand how it works, and how to change its behaviour
  • Although DNA provides the instructions for making the chain of amino acids, predicting how they interact to form a 3D shape was more tricky and, until recently, scientists had only deciphered a fraction of the 200m or so proteins known to science
  • ...7 more annotations...
  • In November 2020, the AI group DeepMind announced it had developed a program called AlphaFold that could rapidly predict this information using an algorithm. Since then, it has been crunching through the genetic codes of every organism that has had its genome sequenced, and predicting the structures of the hundreds of millions of proteins they collectively contain.
  • Last year, DeepMind published the protein structures for 20 species – including nearly all 20,000 proteins expressed by humans – on an open database. Now it has finished the job, and released predicted structures for more than 200m proteins.
  • “Essentially, you can think of it as covering the entire protein universe. It includes predictive structures for plants, bacteria, animals, and many other organisms, opening up huge new opportunities for AlphaFold to have an impact on important issues, such as sustainability, food insecurity, and neglected diseases,”
  • In May, researchers led by Prof Matthew Higgins at the University of Oxford announced they had used AlphaFold’s models to help determine the structure of a key malaria parasite protein, and work out where antibodies that could block transmission of the parasite were likely to bind.
  • “Previously, we’d been using a technique called protein crystallography to work out what this molecule looks like, but because it’s quite dynamic and moves around, we just couldn’t get to grips with it,” Higgins said. “When we took the AlphaFold models and combined them with this experimental evidence, suddenly it all made sense. This insight will now be used to design improved vaccines which induce the most potent transmission-blocking antibodies.”
  • AlphaFold’s models are also being used by scientists at the University of Portsmouth’s Centre for Enzyme Innovation, to identify enzymes from the natural world that could be tweaked to digest and recycle plastics. “It took us quite a long time to go through this massive database of structures, but opened this whole array of new three-dimensional shapes we’d never seen before that could actually break down plastics,” said Prof John McGeehan, who is leading the work. “There’s a complete paradigm shift. We can really accelerate where we go from here
  • “AlphaFold protein structure predictions are already being used in a myriad of ways. I expect that this latest update will trigger an avalanche of new and exciting discoveries in the months and years ahead, and this is all thanks to the fact that the data are available openly for all to use.”
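Since the annotation above stresses that the predictions are openly available, here is a minimal, hedged sketch of fetching one predicted structure. The download URL pattern, the "model_v4" version suffix, and the choice of UniProt accession are assumptions for illustration, not details from the article; check the AlphaFold database site for the current access scheme before relying on this.

```python
import urllib.request

# Hypothetical example of pulling one predicted structure from the public AlphaFold DB.
# The URL pattern and the "model_v4" suffix are assumptions and may have changed; see
# https://alphafold.ebi.ac.uk for the current scheme. P69905 is the UniProt accession
# for human hemoglobin subunit alpha.
ACCESSION = "P69905"
URL = f"https://alphafold.ebi.ac.uk/files/AF-{ACCESSION}-F1-model_v4.pdb"

with urllib.request.urlopen(URL) as resp:
    pdb_text = resp.read().decode("utf-8")

# Count residues by looking at alpha-carbon ATOM records in the PDB file.
ca_lines = [line for line in pdb_text.splitlines()
            if line.startswith("ATOM") and line[12:16].strip() == "CA"]
print(f"{ACCESSION}: predicted structure with {len(ca_lines)} residues")
```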
Javier E

Why it's as hard to escape an echo chamber as it is to flee a cult | Aeon Essays - 0 views

  • there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs.
  • they work in entirely different ways, and they require very different modes of intervention
  • An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
  • ...90 more annotations...
  • start with epistemic bubbles
  • That omission might be purposeful
  • But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests
  • An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders.
  • an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy.
  • In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.
  • The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.
  • Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly
  • They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017).
  • The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views
  • various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.
  • Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced.
  • That’s why we all depend on extended social networks to deliver us knowledge
  • any such informational network needs the right sort of broadness and variety to work
  • Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.
  • Epistemic bubbles also threaten us with a second danger: excessive self-confidence.
  • An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission
  • Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection.
  • Luckily, though, epistemic bubbles are easily shattered. We can pop an epistemic bubble simply by exposing its members to the information and arguments that they’ve missed.
  • echo chambers are a far more pernicious and robust phenomenon.
  • Jamieson and Cappella’s book is the first empirical study into how echo chambers function
  • echo chambers work by systematically alienating their members from all outside epistemic sources.
  • Their research centres on Rush Limbaugh, a wildly successful conservative firebrand in the United States, along with Fox News and related media
  • His constant attacks on the ‘mainstream media’ are attempts to discredit all other sources of knowledge. He systematically undermines the integrity of anybody who expresses any kind of contrary view.
  • outsiders are not simply mistaken – they are malicious, manipulative and actively working to destroy Limbaugh and his followers. The resulting worldview is one of deeply opposed force, an all-or-nothing war between good and evil
  • The result is a rather striking parallel to the techniques of emotional isolation typically practised in cult indoctrination
  • cult indoctrination involves new cult members being brought to distrust all non-cult members. This provides a social buffer against any attempts to extract the indoctrinated person from the cult.
  • The echo chamber doesn’t need any bad connectivity to function. Limbaugh’s followers have full access to outside sources of information
  • As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain
  • Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught.
  • exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticises him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media.
  • Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview.
  • ‘evidential pre-emption’. What’s happening is a kind of intellectual judo, in which the power and enthusiasm of contrary voices are turned against those contrary voices through a carefully rigged internal structure of belief.
  • One might be tempted to think that the solution is just more intellectual autonomy. Echo chambers arise because we trust others too much, so the solution is to start thinking for ourselves.
  • that kind of radical intellectual autonomy is a pipe dream. If the philosophical study of knowledge has taught us anything in the past half-century, it is that we are irredeemably dependent on each other in almost every domain of knowledge
  • Limbaugh’s followers regularly read – but do not accept – mainstream and liberal news sources. They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources.
  • we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
  • I am quite confident that there are plenty of echo chambers on the political Left. More importantly, nothing about echo chambers restricts them to the arena of politics
  • The world of anti-vaccination is clearly an echo chamber, and it is one that crosses political lines. I’ve also encountered echo chambers on topics as broad as diet (Paleo!), exercise technique (CrossFit!), breastfeeding, some academic intellectual traditions, and many, many more
  • Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.
  • much of the recent analysis has lumped epistemic bubbles together with echo chambers into a single, unified phenomenon. But it is absolutely crucial to distinguish between the two.
  • Epistemic bubbles are rather ramshackle; they go up easily, and they collapse easily
  • Echo chambers are far more pernicious and far more robust. They can start to seem almost like living things. Their belief systems provide structural integrity, resilience and active responses to outside attacks
  • the two phenomena can also exist independently. And of the events we’re most worried about, it’s the echo-chamber effects that are really causing most of the trouble.
  • new data does, in fact, seem to show that people on Facebook actually do see posts from the other side, or that people often visit websites with opposite political affiliation.
  • their basis for evaluation – their background beliefs about whom to trust – are radically different. They are not irrational, but systematically misinformed about where to place their trust.
  • Many people have claimed that we have entered an era of ‘post-truth’.
  • Not only do some political figures seem to speak with a blatant disregard for the facts, but their supporters seem utterly unswayed by evidence. It seems, to some, that truth no longer matters.
  • This is an explanation in terms of total irrationality. To accept it, you must believe that a great number of people have lost all interest in evidence or investigation, and have fallen away from the ways of reason.
  • echo chambers offers a less damning and far more modest explanation. The apparent ‘post-truth’ attitude can be explained as the result of the manipulations of trust wrought by echo chambers.
  • We don’t have to attribute a complete disinterest in facts, evidence or reason to explain the post-truth attitude. We simply have to attribute to certain communities a vastly divergent set of trusted authorities.
  • An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.
  • in many ways, echo-chamber members are following reasonable and rational procedures of enquiry. They’re engaging in critical reasoning. They’re questioning, they’re evaluating sources for themselves, they’re assessing different pathways to information. They are critically examining those who claim expertise and trustworthiness, using what they already know about the world
  • none of this weighs against the existence of echo chambers. We should not dismiss the threat of echo chambers based only on evidence about connectivity and exposure.
  • Notice how different what’s going on here is from, say, Orwellian doublespeak, a deliberately ambiguous, euphemism-filled language designed to hide the intent of the speaker.
  • echo chambers don’t trade in vague, ambiguous pseudo-speech. We should expect that echo chambers would deliver crisp, clear, unambiguous claims about who is trustworthy and who is not
  • clearly articulated conspiracy theories, and crisply worded accusations of an outside world rife with untrustworthiness and corruption.
  • Once an echo chamber starts to grip a person, its mechanisms will reinforce themselves.
  • In an epistemically healthy life, the variety of our informational sources will put an upper limit to how much we’re willing to trust any single person. Everybody’s fallible; a healthy informational network tends to discover people’s mistakes and point them out. This puts an upper ceiling on how much you can trust even your most beloved leader
  • Inside an echo chamber, that upper ceiling disappears.
  • Being caught in an echo chamber is not always the result of laziness or bad faith. Imagine, for instance, that somebody has been raised and educated entirely inside an echo chamber
  • when the child finally comes into contact with the larger world – say, as a teenager – the echo chamber’s worldview is firmly in place. That teenager will distrust all sources outside her echo chamber, and she will have gotten there by following normal procedures for trust and learning.
  • It certainly seems like our teenager is behaving reasonably. She could be going about her intellectual life in perfectly good faith. She might be intellectually voracious, seeking out new sources, investigating them, and evaluating them using what she already knows.
  • The worry is that she’s intellectually trapped. Her earnest attempts at intellectual investigation are led astray by her upbringing and the social structure in which she is embedded.
  • Echo chambers might function like addiction, under certain accounts. It might be irrational to become addicted, but all it takes is a momentary lapse – once you’re addicted, your internal landscape is sufficiently rearranged such that it’s rational to continue with your addiction
  • Similarly, all it takes to enter an echo chamber is a momentary lapse of intellectual vigilance. Once you’re in, the echo chamber’s belief systems function as a trap, making future acts of intellectual vigilance only reinforce the echo chamber’s worldview.
  • There is at least one possible escape route, however. Notice that the logic of the echo chamber depends on the order in which we encounter the evidence. An echo chamber can bring our teenager to discredit outside beliefs precisely because she encountered the echo chamber’s claims first. Imagine a counterpart to our teenager who was raised outside of the echo chamber and exposed to a wide range of beliefs. Our free-range counterpart would, when she encounters that same echo chamber, likely see its many flaws
  • Those caught in an echo chamber are giving far too much weight to the evidence they encounter first, just because it’s first. Rationally, they should reconsider their beliefs without that arbitrary preference. But how does one enforce such informational a-historicity?
  • The escape route is a modified version of René Descartes’s infamous method.
  • Descartes set out his method in Meditations on First Philosophy (1641). He had come to realise that many of the beliefs he had acquired in his early life were false. But early beliefs lead to all sorts of other beliefs, and any early falsehoods he’d accepted had surely infected the rest of his belief system.
  • The only solution, thought Descartes, was to throw all his beliefs away and start over again from scratch.
  • He could start over, trusting nothing and no one except those things that he could be entirely certain of, and stamping out those sneaky falsehoods once and for all. Let’s call this the Cartesian epistemic reboot.
  • Notice how close Descartes’s problem is to our hapless teenager’s, and how useful the solution might be. Our teenager, like Descartes, has problematic beliefs acquired in early childhood. These beliefs have infected outwards, infesting that teenager’s whole belief system. Our teenager, too, needs to throw everything away, and start over again.
  • Let’s call the modernised version of Descartes’s methodology the social-epistemic reboot.
  • when she starts from scratch, we won’t demand that she trust only what she’s absolutely certain of, nor will we demand that she go it alone
  • For the social reboot, she can proceed, after throwing everything away, in an utterly mundane way – trusting her senses, trusting others. But she must begin afresh socially – she must reconsider all possible sources of information with a presumptively equanimous eye. She must take the posture of a cognitive newborn, open and equally trusting to all outside sources
  • we’re not asking people to change their basic methods for learning about the world. They are permitted to trust, and trust freely. But after the social reboot, that trust will not be narrowly confined and deeply conditioned by the particular people they happened to be raised by.
  • Such a profound deep-cleanse of one’s whole belief system seems to be what’s actually required to escape. Look at the many stories of people leaving cults and echo chambers
  • Take, for example, the story of Derek Black in Florida – raised by a neo-Nazi father, and groomed from childhood to be a neo-Nazi leader. Black left the movement by, basically, performing a social reboot. He completely abandoned everything he’d believed in, and spent years building a new belief system from scratch. He immersed himself broadly and open-mindedly in everything he’d missed – pop culture, Arabic literature, the mainstream media, rap – all with an overall attitude of generosity and trust.
  • It was the project of years and a major act of self-reconstruction, but those extraordinary lengths might just be what’s actually required to undo the effects of an echo-chambered upbringing.
  • we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
  • Stories of actual escapes from echo chambers often turn on particular encounters – moments when the echo-chambered individual starts to trust somebody on the outside.
  • Black’s is a case in point. By high school, he was already something of a star on neo-Nazi media, with his own radio talk-show. He went on to college, openly neo-Nazi, and was shunned by almost every other student in his community college. But then Matthew Stevenson, a Jewish fellow undergraduate, started inviting Black to Stevenson’s Shabbat dinners. In Black’s telling, Stevenson was unfailingly kind, open and generous, and slowly earned Black’s trust. This was the seed, says Black, that led to a massive intellectual upheaval – a slow-dawning realisation of the depths to which he had been misled
  • Similarly, accounts of people leaving echo-chambered homophobia rarely involve them encountering some institutionally reported fact. Rather, they tend to revolve around personal encounters – a child, a family member, a close friend coming out.
  • These encounters matter because a personal connection comes with a substantial store of trust.
  • We don’t simply trust people as educated experts in a field – we rely on their goodwill. And this is why trust, rather than mere reliability, is the key concept
  • goodwill is a general feature of a person’s character. If I demonstrate goodwill in action, then you have some reason to think that I also have goodwill in matters of thought and knowledge.
  • If one can demonstrate goodwill to an echo-chambered member – as Stevenson did with Black – then perhaps one can start to pierce that echo chamber.
  • the path I’m describing is a winding, narrow and fragile one. There is no guarantee that such trust can be established, and no clear path to its being established systematically.
  • what we’ve found here isn’t an escape route at all. It depends on the intervention of another. This path is not even one an echo-chamber member can trigger on her own; it is only a whisper-thin hope for rescue from the outside.
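The "Great Health Facts!" example in the annotations above (agreement among people selected because they already agree carries no evidential weight) can be checked with a small simulation. This is a sketch with invented numbers: it compares the agreement rate you observe when you poll people at random with the rate you observe when you only admit people who already share the belief, in two worlds where the belief is respectively well and poorly supported.

```python
import random

random.seed(42)

def observed_agreement(prob_person_agrees: float, selective: bool, n: int = 10_000) -> float:
    """Fraction of the assembled group that agrees with me, with or without selective admission."""
    admitted = []
    for _ in range(n):
        agrees = random.random() < prob_person_agrees
        if selective and not agrees:
            continue          # the bubble: disagreeing voices are simply never admitted
        admitted.append(agrees)
    return sum(admitted) / len(admitted)

for world, p in [("belief is well supported (70% agree)", 0.7),
                 ("belief is poorly supported (20% agree)", 0.2)]:
    random_sample = observed_agreement(p, selective=False)
    curated_group = observed_agreement(p, selective=True)
    print(f"{world}: random sample agreement {random_sample:.2f}, "
          f"curated group agreement {curated_group:.2f}")

# The curated group shows ~100% agreement in both worlds, so its unanimity tells you
# nothing about which world you are in; the random sample's agreement rate does.
```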
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • ...4 more annotations...
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenA
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • That’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all. (A toy sketch of this preference-based training appears after these annotations, following the next-word example.)
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models.
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
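The “guess the most likely next word” mechanism highlighted above can be illustrated with a deliberately tiny sketch. This is not the model behind ChatGPT or Bing; it is a toy bigram counter over an invented two-sentence corpus, meant only to show the one-word-at-a-time generation loop.

```python
# Toy illustration only: a bigram "language model" over an invented corpus.
# Real systems use neural networks trained on a scrape of much of the internet,
# but the generation loop is the same idea: pick a likely next word, append it, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, max_words: int = 8) -> str:
    """Greedy generation: repeatedly append the single most likely next word."""
    words = [start]
    for _ in range(max_words):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat sat on the" for this toy corpus
```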
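The “reinforcement learning through human feedback” step can likewise be sketched in miniature. The code below is an assumption-laden toy, not either company’s actual pipeline: it fits a Bradley-Terry-style reward model to invented pairwise preference labels, using two hand-rolled features as a stand-in for a real model’s internal representation.

```python
# Toy illustration only: fit a tiny Bradley-Terry reward model from pairwise
# human preferences ("response A was better than response B"). The features,
# data and learning rate below are invented stand-ins.
import math

def features(response: str) -> list:
    # Hypothetical hand-rolled features standing in for a real model's representation.
    return [len(response) / 100.0, float(response.lower().count("sorry"))]

def reward(weights: list, response: str) -> float:
    return sum(w * x for w, x in zip(weights, features(response)))

def train_reward_model(preferences: list, lr: float = 0.1, epochs: int = 200) -> list:
    """preferences: list of (preferred_response, rejected_response) pairs from raters."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for better, worse in preferences:
            # Bradley-Terry: P(better wins) = sigmoid(reward(better) - reward(worse)).
            margin = reward(weights, better) - reward(weights, worse)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient ascent on the log-likelihood pushes the margin up when p < 1.
            fb, fw = features(better), features(worse)
            for i in range(len(weights)):
                weights[i] += lr * (1.0 - p) * (fb[i] - fw[i])
    return weights

# Invented example: raters preferred the fuller, less dismissive replies.
prefs = [
    ("Here is a step-by-step answer to your question about budgeting.", "sorry, can't help"),
    ("The capital of France is Paris.", "sorry, I do not know"),
]
print(train_reward_model(prefs))  # learned weights then score new candidate responses
```

In production systems the learned reward is then used to further fine-tune the language model itself (the “reinforcement learning” part); this sketch stops at the reward model.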
Javier E

Elusive 'Einstein' Solves a Longstanding Math Problem - The New York Times - 0 views

  • after a decade of failed attempts, David Smith, a self-described shape hobbyist of Bridlington in East Yorkshire, England, suspected that he might have finally solved an open problem in the mathematics of tiling: That is, he thought he might have discovered an “einstein.”
  • In less poetic terms, an einstein is an “aperiodic monotile,” a shape that tiles a plane, or an infinite two-dimensional flat surface, but only in a nonrepeating pattern. (The term “einstein” comes from the German “ein stein,” or “one stone” — more loosely, “one tile” or “one shape.”)
  • Your typical wallpaper or tiled floor is part of an infinite pattern that repeats periodically; when shifted, or “translated,” the pattern can be exactly superimposed on itself
  • ...18 more annotations...
  • An aperiodic tiling displays no such “translational symmetry,” and mathematicians have long sought a single shape that could tile the plane in such a fashion. This is known as the einstein problem.
  • Black and white squares also can make weird nonperiodic patterns, in addition to the familiar, periodic checkerboard pattern. “It’s really pretty trivial to be able to make weird and interesting patterns,” he said. The magic of the two Penrose tiles is that they make only nonperiodic patterns — that’s all they can do. “But then the Holy Grail was, could you do it with one — one tile?” Dr. Goodman-Strauss said.
  • Now a new paper — by Mr. Smith and three co-authors with mathematical and computational expertise — proves Mr. Smith’s discovery true. The researchers called their einstein “the hat.”
  • “The most significant aspect for me is that the tiling does not clearly fall into any of the familiar classes of structures that we understand.”
  • “I’m always messing about and experimenting with shapes,” said Mr. Smith, 64, who worked as a printing technician, among other jobs, and retired early. Although he enjoyed math in high school, he didn’t excel at it, he said. But he has long been “obsessively intrigued” by the einstein problem.
  • Sir Roger found the proofs “very complicated.” Nonetheless, he was “extremely intrigued” by the einstein, he said: “It’s a really good shape, strikingly simple.”
  • The simplicity came honestly. Mr. Smith’s investigations were mostly by hand; one of his co-authors described him as an “imaginative tinkerer.”
  • When in November he found a tile that seemed to fill the plane without a repeating pattern, he emailed Craig Kaplan, a co-author and a computer scientist at the University of Waterloo.
  • “It was clear that something unusual was happening with this shape,” Dr. Kaplan said. Taking a computational approach that built on previous research, his algorithm generated larger and larger swaths of hat tiles. “There didn’t seem to be any limit to how large a blob of tiles the software could construct,”
  • The first step, Dr. Kaplan said, was to “define a set of four ‘metatiles,’ simple shapes that stand in for small groupings of one, two, or four hats.” The metatiles assemble into four larger shapes that behave similarly. This assembly, from metatiles to supertiles to supersupertiles, ad infinitum, covered “larger and larger mathematical ‘floors’ with copies of the hat,” Dr. Kaplan said. “We then show that this sort of hierarchical assembly is essentially the only way to tile the plane with hats, which turns out to be enough to show that it can never tile periodically.”
  • some might wonder whether this is a two-tile, not one-tile, set of aperiodic monotiles.
  • Dr. Goodman-Strauss had raised this subtlety on a tiling listserv: “Is there one hat or two?” The consensus was that a monotile counts as such even using its reflection. That leaves an open question, Dr. Berger said: Is there an einstein that will do the job without reflection?
  • “The hat” was not a new geometric invention. It is a polykite — it consists of eight kites. (Take a hexagon and draw three lines, connecting the center of each side to the center of its opposite side; the six shapes that result are kites.) A coordinate sketch of this kite construction appears after these annotations.
  • “It’s likely that others have contemplated this hat shape in the past, just not in a context where they proceeded to investigate its tiling properties,” Dr. Kaplan said. “I like to think that it was hiding in plain sight.”
  • Incredibly, Mr. Smith later found a second einstein. He called it “the turtle” — a polykite made of not eight kites but 10. It was “uncanny,” Dr. Kaplan said. He recalled feeling panicked; he was already “neck deep in the hat.”
  • Dr. Myers, who had done similar computations, promptly discovered a profound connection between the hat and the turtle. And he discerned that, in fact, there was an entire family of related einsteins — a continuous, uncountable infinity of shapes that morph one to the next.
  • this einstein family motivated the second proof, which offers a new tool for proving aperiodicity. The math seemed “too good to be true,” Dr. Myers said in an email. “I wasn’t expecting such a different approach to proving aperiodicity — but everything seemed to hold together as I wrote up the details.”
  • Mr. Smith was amazed to see the research paper come together. “I was no help, to be honest.” He appreciated the illustrations, he said: “I’m more of a pictures person.”
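The parenthetical kite construction above is concrete enough to compute. The short sketch below uses assumed coordinates (a unit regular hexagon centred at the origin, which is not specified in the article) to list the six kites produced by joining the midpoints of opposite sides.

```python
# Toy illustration of the quoted construction, with assumed coordinates:
# a unit regular hexagon centred at the origin, split into six kites by joining
# the midpoints of opposite sides (all three joining lines pass through the centre).
import math

def regular_hexagon(radius: float = 1.0) -> list:
    return [(radius * math.cos(math.radians(60 * k)),
             radius * math.sin(math.radians(60 * k))) for k in range(6)]

def kites_of_hexagon() -> list:
    v = regular_hexagon()
    centre = (0.0, 0.0)
    # Midpoint of each side V_i -> V_{i+1}.
    mid = [((v[i][0] + v[(i + 1) % 6][0]) / 2.0,
            (v[i][1] + v[(i + 1) % 6][1]) / 2.0) for i in range(6)]
    # Each kite: centre, one side's midpoint, the shared hexagon vertex, next side's midpoint.
    return [(centre, mid[i], v[(i + 1) % 6], mid[(i + 1) % 6]) for i in range(6)]

for kite in kites_of_hexagon():
    print([(round(x, 3), round(y, 3)) for x, y in kite])
```

The “hat” itself is assembled from eight kites in the underlying kite grid; drawing it requires that full grid, which this sketch does not attempt.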
Javier E

Who is Andrew Tate, the misogynist hero to millions of young men? | The Economist - 0 views

  • what sets Mr Tate apart from other alt-right social-media personalities and previous anti-feminist online movements is the extent to which his views have found a ready audience among teenage boys.
  • In 2021 Mr Tate established Hustlers University, an online platform where young men could take courses in business and investing for $49.99 a month. It also gave students financial rewards for promoting Mr Tate’s misogynist ideas via a now-suspended affiliate marketing programme. Thanks to a continuing stream of fan-generated content, his views have proliferated on social media even though most platforms have banned his accounts.
  • Part of the reason why Mr Tate has found success specifically on TikTok is that its algorithm is uniquely predictive, appearing not only to rely on the content users watch and recommend but also to make assumptions about their potential interests.
  • ...1 more annotation...
  • That has made him the most popular influencer among American Gen-Zers, according to a twice-yearly survey of 14,500 of the country’s teenage boys and girls by Piper Sandler, a finance company that researches consumer data. Teachers have reported boys as young as 11 praising and emulating him.
Javier E

Opinion | Tesla suffers from the boss's addiction to Twitter - The Washington Post - 0 views

  • For some perspective on what’s happening with Elon Musk and Twitter, I suggest spending a few minutes familiarizing yourself with one of Twitter’s sillier episodes from the past, a fight that erupted almost a year ago between the “shape rotators” of Silicon Valley and the “wordcels” (aspersion intended) of journalism and related professions. Many of the combatants were, at first, merely fighting over which group should have higher social status (theirs), but the episode also highlighted real divisions between West Coast and East — math and verbal, free-speech culture and safety culture, people who make things happen and people who talk about them afterward.
  • For years now, conflict between the two groups has been boiling over onto social media, into courtrooms and onto the pages of major news outlets. Team Shape Rotator believes Team Wordcel is parasitic and dangerous, ballyragging institutions into curbing both free speech and innovation in the name of safety. Team “Stop calling me a Wordcel” sees its opponents as self-centered and reckless, disrupting and mean-meming their way toward some vaguely imagined doom.
  • his audacity seems to be backfiring, as of course did Napoleon’s eventually.
  • ...5 more annotations...
  • You can think of Musk’s acquisition of Twitter as the latest sortie, a takeover of the ultimate wordcel site by the world’s most successful shape rotator.
  • more likely, he fell prey to a different delusion, one in which the shape rotators and the wordcels are united: thinking of Twitter in terms of words and arguments, as a “digital public square” where vital questions are hashed out. It is that, sometimes, but that’s not what it’s designed for. It’s designed to maximize engagement, which is to say, it’s an addiction machine for the highly verbal.
  • Both groups theoretically understand what the machine is doing — the wordcels write endless articles about bad algorithms, and the shape rotators build them. But both nonetheless talk as though they’re saving the world even as they compulsively follow the programming. The shape rotators bait the wordcels because that’s what makes the machine spit out more rewarding likes and retweets. We wordcels return the favor for the same reason.
  • Musk could theoretically rework Twitter’s architecture to downrank provocation and make it less addictive. But of course, that would make it a less profitable business
  • More to the point, the reason he bought it is that he, like his critics, is hooked on it the way it is now. Unfortunately for Tesla shareholders, Musk has now put himself in the position of a dealer who can spend all day getting high on his own supply.
Javier E

For Lee Tilghman, There Is Life After Influencing - The New York Times - 0 views

  • At her first full-time job since leaving influencing, the erstwhile smoothie-bowl virtuoso Lee Tilghman stunned a new co-worker with her enthusiasm for the 9-to-5 grind.
  • The co-worker pulled her aside that first morning, wanting to impress upon her the stakes of that decision. “This is terrible,” he told her. “Like, I’m at a desk.” “You don’t get it,” Ms. Tilghman remembered saying. “You think you’re a slave, but you’re not.” He had it backward, she added. “When you’re an influencer, then you have chains on.”
  • In the late 2010s, for a certain subset of millennial women, Ms. Tilghman was wellness culture, a warm-blooded mood board of Outdoor Voices workout sets, coconut oil and headstands. She had earned north of $300,000 a year — and then dropped more than 150,000 followers, her entire management team, and most of her savings to become an I.R.L. person.
  • ...8 more annotations...
  • The corporate gig, as a social media director for a tech platform, was a revelation. “I could just show up to work and do work,” Ms. Tilghman said. After she was done, she could leave. She didn’t have to be a brand. There’s no comments section at an office job.
  • In 2019, a Morning Consult report found that 54 percent of Gen Z and millennial Americans were interested in becoming influencers. (Eighty-six percent said they would be willing to post sponsored content for money.)
  • If social media has made audiences anxious, it’s driving creators to the brink. In 2021, the TikTok breakout star Charli D’Amelio said she had “lost the passion” for posting videos. A few months later, Erin Kern announced to her 600,000 Instagram followers that she would be deactivating her account @cottonstem; she had been losing her hair, and her doctors blamed work-induced stress
  • Other influencers faded without fanfare — teens whose mental health had taken too much of a hit and amateur influencers who stopped posting after an algorithm tweak tanked their metrics. Some had been at this for a decade or more, starting at 12 or 14 or 19.
  • She posted less, testing out new identities that she hoped wouldn’t touch off the same spiral that wellness had. There were dancing videos, dog photos, interior design. None of it stuck. (“You can change the niche, but you’re still going to be performing your life for content,” she explained over lunch.)
  • Ms. Tilghman’s problem — as the interest in the workshop, which she decided to cap at 15, demonstrated — is that she has an undeniable knack for this. In 2022, she started a Substack to continue writing, thinking of it as a calling card while she applied to editorial jobs; it soon amassed 20,000 subscribers. It once had a different name, but now it’s called “Offline Time.” The paid tier costs $5 a month.
  • Casey Lewis, who helms the After School newsletter about Gen Z consumer trends, predicts more pivots and exits. TikTok has elevated creators faster than other platforms and burned them out quicker, she said.
  • Ms. Lewis expects a swell of former influencers taking jobs with P.R. agencies, marketing firms and product development conglomerates. She pointed out that creators have experience not just in video and photo editing, but in image management, crisis communication and rapid response. “Those skills do transfer,” she said.
Javier E

A Marketplace of Girl Influencers Managed by Moms and Stalked by Men - The New York Times - 0 views

  • Thousands of accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement.
  • Some parents are the driving force behind the sale of photos, exclusive chat sessions and even the girls’ worn leotards and cheer outfits to mostly unknown followers. The most devoted customers spend thousands of dollars nurturing the underage relationships.
  • The large audiences boosted by men can benefit the families, The Times found. The bigger followings look impressive to brands and bolster chances of getting discounts, products and other financial incentives, and the accounts themselves are rewarded by Instagram’s algorithm with greater visibility on the platform, which in turn attracts more followers.
  • ...8 more annotations...
  • One calculation performed by an audience demographics firm found 32 million connections to male followers among the 5,000 accounts examined by The Times.
  • Interacting with the men opens the door to abuse. Some flatter, bully and blackmail girls and their parents to get racier and racier images. The Times monitored separate exchanges on Telegram, the messaging app, where men openly fantasize about sexually abusing the children they follow on Instagram and extol the platform for making the images so readily available.
  • The so-called creator economy surpasses $250 billion worldwide, according to Goldman Sachs, with U.S. brands spending more than $5 billion a year on influencers.
  • The troubling interactions on Instagram come as social media companies increasingly dominate the cultural landscape and the internet is seen as a career path of its own.
  • Nearly one in three preteens lists influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers.
  • “It’s like a candy store
  • Health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls. Constant comparisons to their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers found.
  • The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their children’s images. Some of the child influencers earn six-figure incomes, according to interviews.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece.
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other.”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load.
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Opinion | Gen Z slang terms are influenced by incels - The Washington Post - 0 views

  • Incels (as they’re known) are infamous for sharing misogynistic attitudes and bitter hostility toward the romantically successful
  • somehow, incels’ hateful rhetoric has bizarrely become popularized via Gen Z slang.
  • It’s common to hear the suffix “pilled” as a funny way to say “convinced into a lifestyle.” Instead of “I now love eating burritos,” for instance, one might say, “I’m so burritopilled.” “Pilled” as a suffix comes from a scene in 1999’s “The Matrix” where Neo (Keanu Reeves) had to choose between the red pill and the blue pill, but the modern sense is formed through analogy with “blackpilled,” an online slang term meaning “accepting incel ideology.”
  • ...11 more annotations...
  • the popular suffix “maxxing” for “maximizing” (e.g., “I’m burritomaxxing” instead of “I’m eating a lot of burritos”) is drawn from the incel idea of “looksmaxxing,” or “maximizing attractiveness” through surgical or cosmetic techniques.
  • Then there’s the word “cucked” for “weakened” or “emasculated.” If the taqueria is out of burritos, you might be “tacocucked,” drawing on the incel idea of being sexually emasculated by more attractive “chads.”
  • These slang terms developed on 4chan precisely because of the site’s anonymity. Since users don’t have identifiable aliases, they signal their in-group status through performative fluency in shared slang
  • there’s a dark side to the site as well — certain boards, like /r9k/, are known breeding grounds for incel discussion, and the source of the incel words being used today.
  • finally, we have the word “sigma” for “assertive male,” which comes from an incel’s desired position outside the social hierarchy.
  • Memes and niche vocabulary become a form of cultural currency, fueling their proliferation.
  • From there, those words filter out to more mainstream websites such as Reddit and eventually become popularized by viral memes and TikTok trends. Social media algorithms do the rest of the work by curating recommended content for viewers.
  • Because these terms often spread in ironic contexts, people find them funny, engage with them and are eventually rewarded with more memes featuring incel vocabulary.
  • Creators are not just aware of this process — they are directly incentivized to abet it. We know that using trending audio helps our videos perform better and that incorporating popular metadata with hashtags or captions will help us reach wider audiences
  • kids aren’t actually saying “cucked” because they’re “blackpilled”; they’re using it for the same reason all kids use slang: It helps them bond as a group. And what are they bonding over? A shared mockery of incel ideas.
  • These words capture an important piece of the Gen Z zeitgeist. We should therefore be aware of them, keeping in mind that they’re being used ironically.