
Home/ TOK Friends/ Group items matching "ad" in title, tags, annotations or url


Elusive 'Einstein' Solves a Longstanding Math Problem - The New York Times

  • after a decade of failed attempts, David Smith, a self-described shape hobbyist of Bridlington in East Yorkshire, England, suspected that he might have finally solved an open problem in the mathematics of tiling: That is, he thought he might have discovered an “einstein.”
  • In less poetic terms, an einstein is an “aperiodic monotile,” a shape that tiles a plane, or an infinite two-dimensional flat surface, but only in a nonrepeating pattern. (The term “einstein” comes from the German “ein stein,” or “one stone” — more loosely, “one tile” or “one shape.”)
  • Your typical wallpaper or tiled floor is part of an infinite pattern that repeats periodically; when shifted, or “translated,” the pattern can be exactly superimposed on itself.
  • An aperiodic tiling displays no such “translational symmetry,” and mathematicians have long sought a single shape that could tile the plane in such a fashion. This is known as the einstein problem.
  • black and white squares also can make weird nonperiodic patterns, in addition to the familiar, periodic checkerboard pattern. “It’s really pretty trivial to be able to make weird and interesting patterns,” he said. The magic of the two Penrose tiles is that they make only nonperiodic patterns — that’s all they can do. “But then the Holy Grail was, could you do it with one — one tile?” Dr. Goodman-Strauss said.
  • now a new paper — by Mr. Smith and three co-authors with mathematical and computational expertise — proves Mr. Smith’s discovery true. The researchers called their einstein “the hat.”
  • “The most significant aspect for me is that the tiling does not clearly fall into any of the familiar classes of structures that we understand.”
  • “I’m always messing about and experimenting with shapes,” said Mr. Smith, 64, who worked as a printing technician, among other jobs, and retired early. Although he enjoyed math in high school, he didn’t excel at it, he said. But he has long been “obsessively intrigued” by the einstein problem.
  • Sir Roger found the proofs “very complicated.” Nonetheless, he was “extremely intrigued” by the einstein, he said: “It’s a really good shape, strikingly simple.”
  • The simplicity came honestly. Mr. Smith’s investigations were mostly by hand; one of his co-authors described him as an “imaginative tinkerer.”
  • When in November he found a tile that seemed to fill the plane without a repeating pattern, he emailed Craig Kaplan, a co-author and a computer scientist at the University of Waterloo.
  • “It was clear that something unusual was happening with this shape,” Dr. Kaplan said. Taking a computational approach that built on previous research, his algorithm generated larger and larger swaths of hat tiles. “There didn’t seem to be any limit to how large a blob of tiles the software could construct.”
  • The first step, Dr. Kaplan said, was to “define a set of four ‘metatiles,’ simple shapes that stand in for small groupings of one, two, or four hats.” The metatiles assemble into four larger shapes that behave similarly. This assembly, from metatiles to supertiles to supersupertiles, ad infinitum, covered “larger and larger mathematical ‘floors’ with copies of the hat,” Dr. Kaplan said. “We then show that this sort of hierarchical assembly is essentially the only way to tile the plane with hats, which turns out to be enough to show that it can never tile periodically.”
  • some might wonder whether this is a two-tile, not one-tile, set of aperiodic monotiles.
  • Dr. Goodman-Strauss had raised this subtlety on a tiling listserv: “Is there one hat or two?” The consensus was that a monotile counts as such even using its reflection. That leaves an open question, Dr. Berger said: Is there an einstein that will do the job without reflection?
  • “the hat” was not a new geometric invention. It is a polykite — it consists of eight kites. (Take a hexagon and draw three lines, connecting the center of each side to the center of its opposite side; the six shapes that result are kites.)
  • “It’s likely that others have contemplated this hat shape in the past, just not in a context where they proceeded to investigate its tiling properties,” Dr. Kaplan said. “I like to think that it was hiding in plain sight.”
  • Incredibly, Mr. Smith later found a second einstein. He called it “the turtle” — a polykite made of not eight kites but 10. It was “uncanny,” Dr. Kaplan said. He recalled feeling panicked; he was already “neck deep in the hat.”
  • Dr. Myers, who had done similar computations, promptly discovered a profound connection between the hat and the turtle. And he discerned that, in fact, there was an entire family of related einsteins — a continuous, uncountable infinity of shapes that morph one to the next.
  • this einstein family motivated the second proof, which offers a new tool for proving aperiodicity. The math seemed “too good to be true,” Dr. Myers said in an email. “I wasn’t expecting such a different approach to proving aperiodicity — but everything seemed to hold together as I wrote up the details.”
  • Mr. Smith was amazed to see the research paper come together. “I was no help, to be honest.” He appreciated the illustrations, he said: “I’m more of a pictures person.”
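The hierarchical assembly Dr. Kaplan describes, from metatiles to supertiles to supersupertiles, is an instance of a substitution system, and its mechanics can be sketched in a few lines. The two-symbol rule table below is purely illustrative, not the actual metatile rules from the hat paper; it only shows how each inflation step multiplies the tile population without bound.

```python
# Illustrative substitution system: each "metatile" expands into a
# multiset of metatiles at the next level of the hierarchy.
# RULES is a made-up example, NOT the real H/T/P/F rules from the
# hat paper; it demonstrates only the mechanics of inflation.
RULES = {
    "A": ["A", "B", "B"],
    "B": ["A", "B"],
}

def inflate(counts, rules):
    """Apply one substitution step to a dict of metatile counts."""
    new = {}
    for tile, n in counts.items():
        for child in rules[tile]:
            new[child] = new.get(child, 0) + n
    return new

counts = {"A": 1}
for level in range(5):
    counts = inflate(counts, RULES)
print(counts)  # → {'A': 41, 'B': 58}: the population grows without bound
```

Running more steps only grows the counts; the paper's argument runs in the other direction, showing that every hat tiling must decompose into such a hierarchy, which is what rules out periodic repetition.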
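The hexagon-to-kites construction quoted above can also be checked numerically: each of the six pieces has two pairs of equal adjacent sides, the defining property of a kite. A minimal sketch, assuming a unit regular hexagon centred at the origin:

```python
import math

# Joining midpoints of opposite sides through the centre cuts a
# regular hexagon into six quadrilaterals. Check that one of them
# is a kite: two pairs of equal adjacent sides.
verts = [(math.cos(math.pi / 3 * k), math.sin(math.pi / 3 * k)) for k in range(6)]
mids = [((verts[k][0] + verts[(k + 1) % 6][0]) / 2,
         (verts[k][1] + verts[(k + 1) % 6][1]) / 2) for k in range(6)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

centre = (0.0, 0.0)
# One piece: centre -> midpoint of side 0 -> shared vertex -> midpoint of side 1
kite = [centre, mids[0], verts[1], mids[1]]
sides = [dist(kite[i], kite[(i + 1) % 4]) for i in range(4)]
# Sides 0 and 3 are apothem legs; sides 1 and 2 are half-edges.
print(math.isclose(sides[0], sides[3]), math.isclose(sides[1], sides[2]))
```

The hat itself is eight of these kites glued edge to edge, which is why it slots so naturally into grids that geometers already knew.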

A Brief History of Media and Audiences and Twitter and The Bulwark

  • In the old days—and here I mean even as recently as 2000 or 2004—audiences were built around media institutions. The New York Times had an audience. The New Yorker had an audience. The Weekly Standard had an audience.
  • If you were a writer, you got access to these audiences by contributing to the institutions. No one cared if you, John Smith, wrote a piece about Al Gore. But if your piece about Al Gore appeared in Washington Monthly, then suddenly you had an audience.
  • There were a handful of star writers for whom this wasn’t true: Maureen Dowd, Tom Wolfe, Joan Didion. Readers would follow these stars wherever they appeared. But they were the exceptions to the rule. And the only way to ascend to such exalted status was by writing a lot of great pieces for established institutions and slowly assembling your audience from theirs.
  • The internet stripped institutions of their gatekeeping powers, thus making it possible for anyone to publish—and making it inevitable that many writers would create audiences independent of media institutions.
  • The internet destroyed the apprenticeship system that had dominated American journalism for generations. Under the old system, an aspiring writer took a low-level job at a media institution and worked her way up the ladder until she was trusted enough to write.
  • Under the new system, people started their careers writing outside of institutions—on personal blogs—and then were hired by institutions on the strength of their work.
  • In practice, these outsiders were primarily hired not on the merits of their work, but because of the size of their audience.
  • what it really did was transform the nature of audiences. Once the internet existed it became inevitable that institutions would see their power to hold audiences wane while individual writers would have their power to build personal audiences explode.
  • this meant that institutions would begin to hire based on the size of a writer’s audience. Which meant that writers’ overriding professional imperative was to build an audience, since that was the key to advancement.
  • Twitter killed the blog and lowered the barrier to entry for new writers from “Must have a laptop, the ability to navigate WordPress, and the capacity to write paragraphs” to “Do you have an iPhone and the ability to string 20 words together? With or without punctuation?”
  • If you were able to build a big enough audience on Twitter, then media institutions fell all over themselves trying to hire you — because they believed that you would then bring your audience to them.
  • If you were a writer for the Washington Post, or Wired, or the Saginaw Express, you had to build your own audience not to advance, but to avoid being replaced.
  • For journalists, audience wasn’t just status—it was professional capital. In fact, it was the most valuable professional capital.
  • Everything we just talked about was driven by the advertising model of media, which prized pageviews and unique users above all else. About a decade ago, that model started to fray around the edges, which caused a shift to the subscription model.
  • Today, if you’re a subscription publication, what Twitter gives you is growth opportunity. Twitter’s not the only channel for growth—there are lots of others, from TikTok to LinkedIn to YouTube to podcasts to search. But it’s an important one.
  • Twitter’s attack on Substack was an attack on the subscription model of journalism itself.
  • since media has already seen the ad-based model fall apart, it’s not clear what the alternative will be if the subscription model dies, too.
  • All of which is why having a major social media platform run by a capricious bad actor is suboptimal.
  • And why I think anyone else who’s concerned about the future of media ought to start hedging against Twitter. None of the direct hedges—Post, Mastodon, etc.—are viable yet. But tech history shows that these shifts can happen fairly quickly.

For Lee Tilghman, There Is Life After Influencing - The New York Times

  • At her first full-time job since leaving influencing, the erstwhile smoothie-bowl virtuoso Lee Tilghman stunned a new co-worker with her enthusiasm for the 9-to-5 grind.
  • The co-worker pulled her aside that first morning, wanting to impress upon her the stakes of that decision. “This is terrible,” he told her. “Like, I’m at a desk.” “You don’t get it,” Ms. Tilghman remembered saying. “You think you’re a slave, but you’re not.” He had it backward, she added. “When you’re an influencer, then you have chains on.”
  • In the late 2010s, for a certain subset of millennial women, Ms. Tilghman was wellness culture, a warm-blooded mood board of Outdoor Voices workout sets, coconut oil and headstands. She had earned north of $300,000 a year — and then dropped more than 150,000 followers, her entire management team, and most of her savings to become an I.R.L. person.
  • The corporate gig, as a social media director for a tech platform, was a revelation. “I could just show up to work and do work,” Ms. Tilghman said. After she was done, she could leave. She didn’t have to be a brand. There’s no comments section at an office job.
  • In 2019, a Morning Consult report found that 54 percent of Gen Z and millennial Americans were interested in becoming influencers. (Eighty-six percent said they would be willing to post sponsored content for money.)
  • If social media has made audiences anxious, it’s driving creators to the brink. In 2021, the TikTok breakout star Charli D’Amelio said she had “lost the passion” for posting videos. A few months later, Erin Kern announced to her 600,000 Instagram followers that she would be deactivating her account @cottonstem; she had been losing her hair, and her doctors blamed work-induced stress.
  • Other influencers faded without fanfare — teens whose mental health had taken too much of a hit and amateur influencers who stopped posting after an algorithm tweak tanked their metrics. Some had been at this for a decade or more, starting at 12 or 14 or 19.
  • She posted less, testing out new identities that she hoped wouldn’t touch off the same spiral that wellness had. There were dancing videos, dog photos, interior design. None of it stuck. (“You can change the niche, but you’re still going to be performing your life for content,” she explained over lunch.)
  • Ms. Tilghman’s problem — as the interest in the workshop, which she decided to cap at 15, demonstrated — is that she has an undeniable knack for this. In 2022, she started a Substack to continue writing, thinking of it as a calling card while she applied to editorial jobs; it soon amassed 20,000 subscribers. It once had a different name, but now it’s called “Offline Time.” The paid tier costs $5 a month.
  • Casey Lewis, who helms the After School newsletter about Gen Z consumer trends, predicts more pivots and exits. TikTok has elevated creators faster than other platforms and burned them out quicker, she said.
  • Ms. Lewis expects a swell of former influencers taking jobs with P.R. agencies, marketing firms and product development conglomerates. She pointed out that creators have experience not just in video and photo editing, but in image management, crisis communication and rapid response. “Those skills do transfer,” she said.

'Meta-Content' Is Taking Over the Internet - The Atlantic

  • Jenn, however, has complicated things by adding an unexpected topic to her repertoire: the dangers of social media. She recently spoke about disengaging from it for her well-being; she also posted an Instagram Story about the risks of ChatGPT
  • and, in none other than a YouTube video, recommended Neil Postman’s Amusing Ourselves to Death, a seminal piece of media critique from 1985 that denounces television’s reduction of life to entertainment.
  • (Her other book recommendations included Stolen Focus, by Johann Hari, and Recapture the Rapture, by Jamie Wheal.)
  • Social-media platforms are “preying on your insecurities; they’re preying on your temptations,” Jenn explained to me in an interview that shifted our parasocial connection, at least for an hour, to a mere relationship. “And, you know, I do play a role in this.” Jenn makes money through aspirational advertising, after all—a familiar part of any influencer’s job.
  • She’s pro–parasocial relationships, she explains to the camera, but only if we remain aware that we’re in one. “This relationship does not replace existing friendships, existing relationships,” she emphasizes. “This is all supplementary. Like, it should be in addition to your life, not a replacement.” I sat there watching her talk about parasocial relationships while absorbing the irony of being in one with her.
  • The open acknowledgment of social media’s inner workings, with content creators exposing the foundations of their content within the content itself, is what Alice Marwick, an associate communications professor at the University of North Carolina at Chapel Hill, described to me as “meta-content.”
  • Meta-content can be overt, such as the vlogger Casey Neistat wondering, in a vlog, if vlogging your life prevents you from being fully present in it;
  • But meta-content can also be subtle: a vlogger walking across the frame before running back to get the camera. Or influencers vlogging themselves editing the very video you’re watching, in a moment of space-time distortion.
  • Viewers don’t seem to care. We keep watching, fully accepting the performance. Perhaps that’s because the rise of meta-content promises a way to grasp authenticity by acknowledging artifice; especially in a moment when artifice is easier to create than ever before, audiences want to know what’s “real” and what isn’t.
  • “The idea of a space where you can trust no sources, there’s no place to sort of land, everything is put into question, is a very unsettling, unsatisfying way to live.”
  • So we continue to search for, as Murray observes, the “agreed-upon things, our basic understandings of what’s real, what’s true.” But when the content we watch becomes self-aware and even self-critical, it raises the question of whether we can truly escape the machinations of social media. Maybe when we stare directly into the abyss, we begin to enjoy its company.
  • “The difference between BeReal and the social-media giants isn’t the former’s relationship to truth but the size and scale of its deceptions.” BeReal users still angle their camera and wait to take their daily photo at an aesthetic time of day. The snapshots merely remind us how impossible it is to stop performing online.
  • Jenn’s concern over the future of the internet stems, in part, from motherhood. She recently had a son, Lennon (whose first birthday party I watched on YouTube), and worries about the digital world he’s going to inherit.
  • Back in the age of MySpace, she had her own internet friends and would sneak out to parking lots at 1 a.m. to meet them in real life: “I think this was when technology was really used as a tool to connect us.” Now, she explained, it’s beginning to ensnare us. Posting content online is no longer a means to an end so much as the end itself.
  • We used to view influencers’ lives as aspirational, a reality that we could reach toward. Now both sides acknowledge that they’re part of a perfect product that the viewer understands is unattainable and the influencer acknowledges is not fully real.
  • “I forgot to say this to her in the interview, but I truly think that my videos are less about me and more of a reflection of where you are currently … You are kind of reflecting on your own life and seeing what resonates [with] you, and you’re discarding what doesn’t. And I think that’s what’s beautiful about it.”
  • meta-content is fundamentally a compromise. Recognizing the delusion of the internet doesn’t alter our course within it so much as remind us how trapped we truly are—and how we wouldn’t have it any other way.

'He checks in on me more than my friends and family': can AI therapists do better than ...

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece.
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship — one that includes disagreements, misunderstandings and clarifications — the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other.”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians — only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” — to streamline intake and assessment forms, and lighten the administrative load.
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits.
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”