Group items matching "facebook" in title, tags, annotations or url

How the AI apocalypse gripped students at elite schools like Stanford - The Washington Post

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”

Cleaning Up ChatGPT's Language Takes Heavy Toll on Human Workers - WSJ

  • ChatGPT is built atop a so-called large language model—powerful software trained on swaths of text scraped from across the internet to learn the patterns of human language. The vast data supercharges its capabilities, allowing it to act like an autocompletion engine on steroids. The training also creates a hazard. Given the right prompts, a large language model can generate reams of toxic content inspired by the darkest parts of the internet.
  • ChatGPT’s parent, AI research company OpenAI, has been grappling with these issues for years. Even before it created ChatGPT, it hired workers in Kenya to review and categorize thousands of graphic text passages obtained online and generated by AI itself. Many of the passages contained descriptions of violence, harassment, self-harm, rape, child sexual abuse and bestiality, documents reviewed by The Wall Street Journal show.
  • The company used the categorized passages to build an AI safety filter that it would ultimately deploy to constrain ChatGPT from exposing its tens of millions of users to similar content. (A toy sketch of such a filter appears after these annotations.)
  • “My experience in those four months was the worst experience I’ve ever had in working in a company,” Alex Kairu, one of the Kenya workers, said in an interview.
  • OpenAI marshaled a sprawling global pipeline of specialized human labor for over two years to enable its most cutting-edge AI technologies to exist, the documents show
  • “It’s something that needs to get done,” Sears said. “It’s just so unbelievably ugly.”
  • Reviewing toxic content goes hand-in-hand with the less objectionable work to make systems like ChatGPT usable.
  • The work done for OpenAI is even more vital to the product because it is seeking to prevent the company’s own software from pumping out unacceptable content, AI experts say.
  • Sears said CloudFactory determined there was no way to do the work without harming its workers and decided not to accept such projects.
  • companies could soon spend hundreds of millions of dollars a year to provide AI systems with human feedback. Others estimate that companies are already investing between millions and tens of millions of dollars on it annually. OpenAI said it hired more than 1,000 workers for this purpose.
  • Another layer of human input asks workers to rate different answers from a chatbot to the same question for which is least problematic or most factually accurate. In response to a question asking how to build a homemade bomb, for example, OpenAI instructs workers to upvote the answer that declines to respond, according to OpenAI research. The chatbot learns to internalize the behavior through multiple rounds of feedback. 
  • A spokeswoman for Sama, the San Francisco-based outsourcing company that hired the Kenyan workers, said the work with OpenAI began in November 2021. She said the firm terminated the contract in March 2022 when Sama’s leadership became aware of concerns surrounding the nature of the project and has since exited content moderation completely.
  • OpenAI also hires outside experts to provoke its model to produce harmful content, a practice called “red-teaming” that helps the company find other gaps in its system.
  • At first, the texts were no more than two sentences. Over time, they grew to as much as five or six paragraphs. A few weeks in, Mathenge and Bill Mulinya, another team leader, began to notice the strain on their teams. Workers began taking sick and family leaves with increasing frequency, they said.
  • The tasks that the Kenya-based workers performed to produce the final safety check on ChatGPT’s outputs were yet a fourth layer of human input. It was often psychologically taxing. Several of the Kenya workers said they have grappled with mental illness and that their relationships and families have suffered. Some struggle to continue to work.
  • On July 11, some of the OpenAI workers lodged a petition with the Kenyan parliament urging new legislation to protect AI workers and content moderators. They also called for Kenya’s existing laws to be amended to recognize that being exposed to harmful content is an occupational hazard
  • Mercy Mutemi, a lawyer and managing partner at Nzili & Sumbi Advocates who is representing the workers, said despite their critical contributions, OpenAI and Sama exploited their poverty as well as the gaps in Kenya’s legal framework. The workers on the project were paid on average between $1.46 and $3.74 an hour, according to a Sama spokeswoman.
  • The Sama spokeswoman said the workers engaged in the OpenAI project volunteered to take on the work and were paid according to an internationally recognized methodology for determining a living wage. The contract stated that the fee was meant to cover others not directly involved in the work, including project managers and psychological counselors.
  • Kenya has become a hub for many tech companies seeking content moderation and AI workers because of its high levels of education and English literacy and the low wages associated with deep poverty.
  • Some Kenya-based workers are suing Meta’s Facebook after nearly 200 workers say they were traumatized by work requiring them to review videos and images of rapes, beheadings and suicides.
  • A Kenyan court ruled in June that Meta was legally responsible for the treatment of its contract workers, setting the stage for a shift in the ground rules that tech companies including AI firms will need to abide by to outsource projects to workers in the future.
  • OpenAI signed a one-year contract with Sama to start work in November 2021. At the time, mid-pandemic, many workers viewed having any work as a miracle, said Richard Mathenge, a team leader on the OpenAI project for Sama and a cosigner of the petition.
  • OpenAI researchers would review the text passages and send them to Sama in batches for the workers to label one by one. That text came from a mix of sources, according to an OpenAI research paper: public data sets of toxic content compiled and shared by academics, posts scraped from social media and internet forums such as Reddit and content generated by prompting an AI model to produce harmful outputs. 
  • The generated outputs were necessary, the paper said, to have enough examples of the kind of graphic violence that its AI systems needed to avoid. In one case, OpenAI researchers asked the model to produce an online forum post of a teenage girl whose friend had enacted self-harm, the paper said.
  • OpenAI asked the workers to parse text-based sexual content into four categories of severity, documents show. The worst was descriptions of child sexual-abuse material, or C4. The C3 category included incest, bestiality, rape, sexual trafficking and sexual slavery—sexual content that could be illegal if performed in real life.
  • Jason Kwon, general counsel at OpenAI, said in an interview that such work was really valuable and important for making the company’s systems safe for everyone that uses them. It allows the systems to actually exist in the world, he said, and provides benefits to users.
  • Working on the violent-content team, Kairu said, he read hundreds of posts a day, sometimes describing heinous acts, such as people stabbing themselves with a fork or using unspeakable methods to kill themselves
  • He began to have nightmares. Once affable and social, he grew socially isolated, he said. To this day he distrusts strangers. When he sees a fork, he sees a weapon.
  • Mophat Okinyi, a quality analyst, said his work included having to read detailed paragraphs about parents raping their children and children having sex with animals. He worked on a team that reviewed sexual content, which was contracted to handle 15,000 posts a month, according to the documents. His six months on the project tore apart his family, he said, and left him with trauma, anxiety and depression.
  • In March 2022, management told staffers the project would end earlier than planned. The Sama spokeswoman said the change was due to a dispute with OpenAI over one part of the project that involved handling images. The company canceled all contracts with OpenAI and didn’t earn the full $230,000 that had been estimated for the four projects, she said.
  • Several months after the project ended, Okinyi came home one night with fish for dinner for his wife, who was pregnant, and stepdaughter. He discovered them gone and a message from his wife that she’d left, he said. “She said, ‘You’ve changed. You’re not the man I married. I don’t understand you anymore,’” he said.
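The labeling pipeline described in these annotations works by turning reviewers' category tags into training data for a text classifier that screens outputs before users see them. Below is a minimal sketch of that idea, assuming scikit-learn is available; the severity labels mirror the scheme the article describes (C4 worst, C3 next), but the example passages and the TF-IDF plus logistic-regression model are invented stand-ins, not OpenAI's actual system.

```python
# Toy sketch of a category-labeled safety filter (illustrative only).
# The labels follow the severity scheme described in the article; the
# training examples and model choice are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical passages of the kind human reviewers labeled one by one.
passages = [
    "a recipe for vegetable soup",        # benign
    "a graphic description of violence",  # would be flagged
    "an innocuous product review",        # benign
    "text depicting sexual violence",     # would be flagged
]
labels = ["safe", "C3", "safe", "C3"]

# Train a simple text classifier on the human-labeled examples.
filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
filter_model.fit(passages, labels)

# At serving time, candidate model outputs are classified before release;
# anything not predicted "safe" would be withheld or regenerated.
candidate = "a graphic description of violence"
print(filter_model.predict([candidate])[0])  # e.g. "C3" -> block output
```

A production filter would involve vastly more labeled data, finer-grained categories, and calibrated thresholds; the point here is only that the reviewers' category labels are the training signal.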

Book review of The Square and the Tower: Networks and Power, from the Freemasons to Facebook by Niall Ferguson - The Washington Post

  • Ferguson maintains that historians have paid too much attention to hierarchies (monarchies, empires, nation-states, governments, armies, corporations) and too little to the loose social networks that often end up disrupting them.
  • “traditional historical research relied heavily for its source material on the documents produced by hierarchical institutions such as states. Networks do keep records, but they are not so easy to find.”
  • The author argues that dismissing the role of social networks is a grave mistake because these loose organizational arrangements have been far more important in shaping history than most historians know or are prepared to accept
  • the power of networks has varied over time and that the relative importance of the tower and the square has ebbed and flowed. Nonetheless, Ferguson sees two specific periods as standing out as intensely “networked eras.” The first started in the late 15th century, after the introduction in Europe of the printing press, and lasted until the late 18th century. The second, “our own time,” began in the 1970s and is still going on.
  • from the late 1790s until the late 1960s, was terrible for networks. Ferguson writes that “hierarchical institutions re-established their control and successfully shut down or co-opted networks. The zenith of hierarchically organized power was in fact the mid-twentieth century — the era of totalitarian regimes and total war.”
  • “The Square and the Tower” will not disappoint readers who have come to expect from Ferguson ambition, erudition, originality and expansive historical panoramas. These often come mixed with telling anecdotes, illuminating minutiae, fun facts and even some facile one-liners that, while entertaining, don’t add much to the argument.
  • it is too much, and not all of it is illuminated by the “theoretical insights from myriad disciplines.” In fact, it is surprising how little Ferguson relies on the initial chapters on network theory to make his case.
  • In the remaining eight parts of the book, this network theory mostly disappears and the story is told in standard historical narrative.
  • its main unit of analysis, the social network, is too imprecise a concept to provide a solid foundation from which to launch the book’s epic theorizing. Most networks have some hierarchical features, and, as Ferguson notes, “a hierarchy is just a special kind of network.”
  • Nonetheless, the networks-and-hierarchies dichotomy does work as a narrative device that allows a gifted storyteller to take his readers on a fascinating tour of world history.

You Have Permission to Be a Smartphone Skeptic - The Bulwark

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • One camp says yes, the kids are fine
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of TikTok and YouTube and 4chan
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • No one wants to come down on the side of tamping off pleasures and suppressing teen activity.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism.”
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude or hysteric or bumpkin.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is: What technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.

Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers - The New York Times

  • At a 2014 aerospace event at the Massachusetts Institute of Technology, Mr. Musk indicated that he was hesitant to build A.I. himself. “I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.”
  • That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of A.I. Mr. Musk gave a speech there, arguing that A.I. could cross into dangerous territory without anyone realizing it and announced that he would help fund the institute. He gave $10 million.
  • OpenAI was set up as a nonprofit, with Mr. Musk and others pledging $1 billion in donations. The lab vowed to “open source” all its research, meaning it would share its underlying software code with the world. Mr. Musk and Mr. Altman argued that the threat of harmful A.I. would be mitigated if everyone, rather than just tech giants like Google and Facebook, had access to the technology.
  • as OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using A.I., individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.
  • Mr. Musk renewed his complaints that A.I. was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from A.I., even though his car company has used A.I. systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.
  • During the interview last week with Mr. Carlson, Mr. Musk said OpenAI was no longer serving as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum-truth-seeking A.I. that tries to understand the nature of the universe.”
  • Experts who have discussed A.I. with Mr. Musk believe he is sincere in his worries about the technology’s dangers, even as he builds it himself. Others said his stance was influenced by other motivations, most notably his efforts to promote and profit from his companies.

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • the moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict

How a Scottish Moral Philosopher Got Elon Musk's Number - The New York Times

  • a Scottish moral philosopher. The philosopher, William MacAskill,
  • his latest book, “What We Owe the Future,” became a best seller after it was published in August.
  • His rising profile parallels the worldwide growth of the giving community he helped found, effective altruism. Once a niche pursuit for earnest vegans and volunteer kidney donors who lived frugally so that they would have more money to give away for cheap medical interventions in developing countries, it has emerged as a significant force in philanthropy, especially in millennial and Gen-Z giving.
  • In a few short years, effective altruism has become the giving philosophy for many Silicon Valley programmers, hedge funders and even tech billionaires. That includes not just Mr. Bankman-Fried but also the Facebook and Asana co-founder Dustin Moskovitz and his wife, Cari Tuna, who are devoting much of their fortune to the cause.
  • “If I can help encourage people who do have enormous resources to not buy yachts and instead put that money toward pandemic preparedness and A.I. safety and bed nets and animal welfare that’s just like a really good thing to do,” Mr. MacAskill said.
  • Mr. Musk has not officially joined the movement but he and Mr. MacAskill have known each other since 2015, when they met at an effective altruism conference. Mr. Musk has also said on Twitter that Mr. MacAskill’s giving philosophy is similar to his own.
  • Mr. MacAskill was one of the founders of the group Giving What We Can, started at Oxford in 2009. Members promised to give away at least 10 percent of what they earned to the most cost-effective charities possible.
  • If the movement has an ur-text, it is the Australian philosopher Peter Singer’s article, “Famine, Affluence and Morality,” published in 1972. The essay, which argued that there was no difference morally between the obligation to help a person dying on the street in front of your house and the obligation to help people who were dying elsewhere in the world, emerged as a kind of “sleeper hit” for young people in the past two decades,
  • Traditionally, effective altruism was focused on finding the lowest-cost interventions that did the most good. The classic example is insecticide-treated bed nets to prevent mosquitoes from giving people malaria.
  • Mr. MacAskill argues that people living today have a responsibility not just to people halfway around the world but also those in future generations.
  • The rise of this kind of thinking, known as longtermism, has meant the Effective Altruists are increasingly associated with causes that have the ring of science fiction to them — like preventing artificial intelligence from running amok or sending people to distant planets to increase our chances of survival as a species
  • The two men first met in 2012, when Mr. Bankman-Fried was a student at M.I.T. with an interest in utilitarian philosophy.
  • Over lunch, Mr. Bankman-Fried said that he was interested in working on issues related to animal welfare. Mr. MacAskill suggested that he might do more good by entering a high-earning field and donating money to the cause than by working for it directly.
  • Mr. Bankman-Fried contacted the Humane League and other charities, asking if they would prefer his time or donations based on his expected earnings if he went to work in tech or finance. They opted for the money, and he embarked on a remunerative career, eventually founding the cryptocurrency exchange FTX in 2019.
  • Bloomberg recently estimated that Mr. Bankman-Fried was worth $10.5 billion, even after the recent crash in crypto prices. That puts Mr. Bankman-Fried in the unusual position of having earned his enormous fortune on behalf of the effective altruism cause, rather than making the money and then searching for a sense of purpose in donating it.
  • Mr. Bankman-Fried said he expected to give away the bulk of his fortune in the next 10 to 20 years.
  • Mr. Moskovitz and Ms. Tuna’s net worth is estimated at $12.7 billion. They founded their own group, Good Ventures, in 2011. The group said it had given $1.96 billion in donations
  • Those two enormous fortunes, along with giving by scores of highly paid engineers at tech companies, mean the community is exceptionally well funded.
  • Mr. MacAskill said that he got to know Mr. Musk better through Igor Kurganov, a professional poker player and effective altruist, who briefly advised Mr. Musk on philanthropy.
  • In August, Mr. Musk retweeted Mr. MacAskill’s book announcement to his 108 million followers with the observation: “Worth reading. This is a close match for my philosophy.” Yet instead of wholeheartedly embracing that endorsement as many would, Mr. MacAskill posted a typically earnest and detailed thread in response about some of the places he agreed — and many areas where he disagreed — with Mr. Musk. (They did not see eye to eye on near-term space settlement, for one.)
  • Mr. MacAskill accepts responsibility for what he calls misconceptions about the community. “I take a significant amount of blame,” he said, “for being a philosopher who was unprepared for this amount of media attention.”

'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases | Artificial intelligence (AI) | The Guardian

  • “It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”
  • something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.
  • As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images
  • The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.
  • What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
  • one particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.
  • Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.
  • The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six people of colour. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
  • When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.
  • She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube
  • A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals.”)
  • in 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.
  • Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.
  • When the paper was submitted for internal review, Gebru was quickly contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.
  • After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”
  • Running alongside this is a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.
  • “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can aggregate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A minimal sketch of this scoring step appears after these annotations.)
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There’s no actual shape, so it’s not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn’t baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
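The R.L.H.F. step described in these annotations (humans scoring or ranking chatbot responses, with the scores fed back into the model) is commonly implemented by training a reward model on pairwise preferences. The sketch below is a minimal, assumption-laden illustration in plain NumPy: the toy feature vectors, the two-response comparison format, and the Bradley-Terry-style logistic loss are generic textbook choices, not any lab's actual code.

```python
# Minimal sketch of the pairwise-preference step behind R.L.H.F.
# (illustrative assumptions throughout: toy features, a linear reward
# model, and a Bradley-Terry-style logistic loss on human comparisons).
import numpy as np

rng = np.random.default_rng(0)

# Pretend each response is already encoded as a feature vector. Each
# training example is a (chosen, rejected) pair: the human rater upvoted
# `chosen` over `rejected`, e.g. an answer that declines a harmful
# request over one that complies.
dim = 8
chosen = rng.normal(size=(100, dim)) + 0.5    # toy "preferred" responses
rejected = rng.normal(size=(100, dim)) - 0.5  # toy "dispreferred" responses

w = np.zeros(dim)  # linear reward model: reward(x) = w @ x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Maximize log p(chosen preferred) = log sigmoid(r_chosen - r_rejected)
# by gradient descent on the negative log-likelihood.
lr = 0.1
for _ in range(200):
    margin = chosen @ w - rejected @ w
    grad = ((sigmoid(margin) - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

# The trained reward model now scores new responses; in full R.L.H.F.,
# those scores steer further fine-tuning of the language model itself.
print("mean margin after training:", float((chosen @ w - rejected @ w).mean()))
```

In the meme's terms, this reward signal is the smiley-face mask: it reshapes which outputs the underlying model offers without changing what the base model absorbed from its training data.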

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Javier E

AI is already writing books, websites and online recipes - The Washington Post - 0 views

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.”
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.” (See the arithmetic sketched after this list.)
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing. “My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”
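To make Levin’s $250-to-$10 arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Only the two per-article costs come from the passage above; the ad revenue per visit is an invented, illustrative figure.

```python
import math

# Back-of-the-envelope clickbait economics. The $250 (human) and $10 (AI)
# per-article costs come from the quoted passage; the ad revenue per visit
# is an assumed, illustrative figure.
REVENUE_PER_VISIT = 0.005  # assumption: half a cent of ad revenue per visit

def breakeven_visits(cost_per_article: float) -> int:
    """Visits needed before an article recoups its production cost."""
    return math.ceil(cost_per_article / REVENUE_PER_VISIT)

for label, cost in [("human-written review", 250.0), ("AI-generated review", 10.0)]:
    print(f"{label}: breaks even at {breakeven_visits(cost):,} visits")
```

On these assumptions, the human-written review needs 50,000 visits to break even and the AI-generated one needs 2,000 — a 25x drop in break-even traffic, which is the whole business case for the content deluge described above.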
Javier E

Amazon Prime Day Is Dystopian - The Atlantic - 0 views

  • When Prime was introduced, in 2005, Amazon was relatively small, and still known mostly for books. As the company’s former director of ordering, Vijay Ravindran, told Recode’s Jason Del Rey in 2019, Prime “was brilliant. It made Amazon the default.”
  • It created incentives for users to be loyal to Amazon, so they could recoup the cost of membership, then $79 for unlimited two-day shipping. It also enabled Amazon to better track the products they buy and, when video streaming was added as a perk in 2011, the shows they watch, in order to make more things that the data indicated people would want to buy and watch, and to surface the things they were most likely to buy and watch at the very top of the page.
  • And most important, Prime habituated consumers to a degree of convenience, speed, and selection that, while unheard-of just years before, was made standard virtually overnight.
  • “It is genius for the current consumer culture,” Christine Whelan, a clinical professor of consumer science at the University of Wisconsin at Madison, told me. “It encourages and then meets the need for the thing, so we then continue on the hedonic treadmill: Buy the latest thing we want and then have it delivered immediately and then buy the next latest thing.”
  • With traditional retail, “there’s the friction of having to go to the store, there’s the friction of will the store have it, there’s the friction of carrying it,” Whelan said. “There’s the friction of having to admit to another human being that you’re buying it. And when you remove the friction, you also remove a lot of individual self-control. The more you are in the ecosystem and the easier it is to make a purchase, the easier it is to say yes to your desire rather than no.”
  • “It used to be that being a consumer was all about choice,”
  • But now, “two-thirds of people start their product searches on Amazon.”
  • Prime discourages comparison shopping—looking around is pointless when everything you need is right here—even as Amazon’s sheer breadth of products makes shoppers feel as if they have agency.
  • “Consumerism has become a key way that people have misidentified freedom,”
  • what Amazon represents is a corporate infrastructure that is increasingly directed at getting as many consumers as possible locked into a consumerist process—an Amazon consumer for life.”
  • Amazon offers steep discounts to college students and new parents, two groups that are highly likely to change their buying behavior. It keeps adding more discounts and goodies to the Prime bundle, making subscribing ever more appealing. And, in an especially sinister move, it makes quitting Prime maddeningly difficult.
  • As subscription numbers grew through the 2010s, the revenue from them helped Amazon pump more money into building fulfillment centers (to get products to people even faster), acquiring new businesses (to control even more of the global economy), and adding more perks to the bundle (to encourage more people to sign up)
  • In 2019, Amazon shaved a full day off its delivery time, making one-day shipping the default, and also making Prime an even more tantalizing proposition: Why hop in the car for anything at all when you could get it delivered tomorrow, for free?
  • the United States now has more Prime memberships than households. In 2020, Amazon’s revenue from subscriptions alone—mostly Prime—was $25.2 billion, a 31 percent increase from the previous year
  • Thanks in large part to the revenue from Prime subscriptions and from the things subscribers buy, Amazon’s value has multiplied roughly 97 times, to $1.76 trillion, since the service was introduced. Amazon is the second-largest private employer in the United States, after Walmart, and it is responsible for roughly 40 percent of all e-commerce in the United States.
  • It controls hundreds of millions of square feet across the country and is opening more fulfillment centers all the time. It has acquired dozens of other companies, most recently the film studio MGM for $8.5 billion. Its cloud-computing operation, Amazon Web Services, is the largest of its kind and provides the plumbing for a vast swath of the internet, to a profit of $13.5 billion last year.
  • Amazon has entered some 40 million American homes in the form of the Alexa smart speaker, and some 150 million American pockets in the form of the Amazon app
  • “Amazon is a beast we’ve never seen before,” Alimahomed-Wilson told me. “Amazon powers our Zoom calls. It contracts with ICE. It’s in our neighborhoods. This is a very different thing than just being a large retailer, like Walmart or the Ford Motor Company.”
  • I find it useful to compare Big Tech to climate change, another force that is altering the destiny of everyone on Earth, forever. Both present themselves to us all the time in small ways—a creepy ad here, an uncommonly warm November there—but are so big, so abstract, so everywhere that they’re impossible for any one person to really understand
  • Both are the result of a decades-long, very human addiction to consumption and convenience that has been made grotesque and extreme by the incentives and mechanisms of the internet, market consolidation, and economic stratification
  • Both have primarily been advanced by a small handful of very big companies that are invested in making their machinations unseeable to the naked eye.
  • Speed and convenience aren’t actually free; they never are. Free shipping isn’t free either. It just obscures the real price.
  • Next-day shipping comes with tremendous costs: for labor and logistics and transportation and storage; for the people who pack your stuff into those smiling boxes and for the people who deliver them; for the planes and trucks and vans that carry them; for the warehouses that store them; for the software ensuring that everything really does get to your door on time, for air-conditioning and gas and cardboard and steel. Amazon—Prime in particular—has done a superlative job of making all those costs, all those moving parts, all those externalities invisible to the consumer.
  • The pandemic drove up demand for Amazon, and for labor: Last year, company profits shot up 70 percent, Bezos’s personal wealth grew by $70 billion, and 1,400 people a day joined the company’s workforce.
  • Amazon is so big that every sector of our economy has bent to respond to the new way of consuming that it invented. Prime isn’t just bad for Amazon’s workers—it’s bad for Target’s, and Walmart’s. It’s bad for the people behind the counter at your neighborhood hardware store and bookstore, if your neighborhood still has a hardware store and a bookstore. Amazon has accustomed shoppers to a pace and manner of buying that depends on a miracle of precision logistics even when it’s managed by one of the biggest companies on Earth. For the smaller guys, it’s downright impossible.
  • “Every decision we make is based upon the fact that Amazon can get these books cheaper and faster. The prevailing expectation is you can get anything online shipped for”— he scrunched his fingers into air quotes—“‘free,’ in one or two days. And there’s really only one company that can do that. They do that because they’re willing to push and exploit their workers.”
  • Just as abstaining from flying for moral reasons won’t stop sea-level rise, one person canceling Prime won’t do much of anything to a multinational corporation’s bottom line. “It’s statistically insignificant to Amazon. They’ll never feel it,” Caine told me. But, he said, “the small businesses in your neighborhood will absolutely feel the addition of a new customer. Individual choices do make a big difference to them.”
  • Whelan teaches a class at UW called Consuming Happiness, and she is fond of giving her students the adage that you can buy happiness—“if you spend your money in keeping with your values: spending prosocially, on experiences. Tons of research shows us this.”
Javier E

Peter Thiel Is Taking a Break From Democracy - The Atlantic - 0 views

  • Thiel’s unique role in the American political ecosystem. He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos. As such, he has become the embodiment of a strain of thinking that is pronounced—and growing—among tech founders.
  • why does he want to cut off politicians
  • But the days when great men could achieve great things in government are gone, Thiel believes. He disdains what the federal apparatus has become: rule-bound, stifling of innovation, a “senile, central-left regime.”
  • Peter Thiel has lost interest in democracy.
  • Thiel has cultivated an image as a man of ideas, an intellectual who studied philosophy with René Girard and owns first editions of Leo Strauss in English and German. Trump quite obviously did not share these interests, or Thiel’s libertarian principles.
  • For years, Thiel had been saying that he generally favored the more pessimistic candidate in any presidential race because “if you’re too optimistic, it just shows you’re out of touch.” He scorned the rote optimism of politicians who, echoing Ronald Reagan, portrayed America as a shining city on a hill. Trump’s America, by contrast, was a broken landscape, under siege.
  • Thiel is not against government in principle, his friend Auren Hoffman (who is no relation to Reid) says. “The ’30s, ’40s, and ’50s—which had massive, crazy amounts of power—he admires because it was effective. We built the Hoover Dam. We did the Manhattan Project,” Hoffman told me. “We started the space program.”
  • Their failure to make the world conform to his vision has soured him on the entire enterprise—to the point where he no longer thinks it matters very much who wins the next election.
  • His libertarian critique of American government has curdled into an almost nihilistic impulse to demolish it.
  • “Voting for Trump was like a not very articulate scream for help,” Thiel told me. He fantasized that Trump’s election would somehow force a national reckoning. He believed somebody needed to tear things down—slash regulations, crush the administrative state—before the country could rebuild.
  • He admits now that it was a bad bet.
  • “There are a lot of things I got wrong,” he said. “It was crazier than I thought. It was more dangerous than I thought. They couldn’t get the most basic pieces of the government to work. So that was—I think that part was maybe worse than even my low expectations.”
  • Reid Hoffman, who has known Thiel since college, long ago noticed a pattern in his old friend’s way of thinking. Time after time, Thiel would espouse grandiose, utopian hopes that failed to materialize, leaving him “kind of furious or angry” about the world’s unwillingness to bend to whatever vision was possessing him at the moment
  • Thiel is worth between $4 billion and $9 billion. He lives with his husband and two children in a glass palace in Bel Air that has nine bedrooms and a 90-foot infinity pool. He is a titan of Silicon Valley and a conservative kingmaker.
  • “Peter tends to be not ‘glass is half empty’ but ‘glass is fully empty,’” Hoffman told me.
  • he tells the story of his life as a series of disheartening setbacks.
  • He met Mark Zuckerberg, liked what he heard, and became Facebook’s first outside investor. Half a million dollars bought him 10 percent of the company, most of which he cashed out for about $1 billion in 2012.
  • Thiel made some poor investments, losing enormous sums by going long on the stock market in 2008, when it nose-dived, and then shorting the market in 2009, when it rallied
  • on the whole, he has done exceptionally well. Alex Karp, his Palantir co-founder, who agrees with Thiel on very little other than business, calls him “the world’s best venture investor.”
  • Thiel told me this is indeed his ambition, and he hinted that he may have achieved it.
  • He longs for radical new technologies and scientific advances on a scale most of us can hardly imagine
  • He longs for a world in which great men are free to work their will on society, unconstrained by government or regulation or “redistributionist economics” that would impinge on their wealth and power—or any obligation, really, to the rest of humanity
  • Did his dream of eternal life trace to The Lord of the Rings?
  • He takes for granted that this kind of progress will redound to the benefit of society at large.
  • More than anything, he longs to live forever.
  • Calling death a law of nature is, in his view, just an excuse for giving up. “It’s something we are told that demotivates us from trying harder,”
  • Thiel grew up reading a great deal of science fiction and fantasy—Heinlein, Asimov, Clarke. But especially Tolkien; he has said that he read the Lord of the Rings trilogy at least 10 times. Tolkien’s influence on his worldview is obvious: Middle-earth is an arena of struggle for ultimate power, largely without government, where extraordinary individuals rise to fulfill their destinies. Also, there are immortal elves who live apart from men in a magical sheltered valley.
  • But his dreams have always been much, much bigger than that.
  • Yes, Thiel said, perking up. “There are all these ways where trying to live unnaturally long goes haywire” in Tolkien’s works. But you also have the elves.
  • “How are the elves different from the humans in Tolkien? And they’re basically—I think the main difference is just, they’re humans that don’t die.”
  • During college, he co-founded The Stanford Review, gleefully throwing bombs at identity politics and the university’s diversity-minded reform of the curriculum. He co-wrote The Diversity Myth in 1995, a treatise against what he recently called the “craziness and silliness and stupidity and wickedness” of the left.
  • Thiel laid out a plan, for himself and others, “to find an escape from politics in all its forms.” He wanted to create new spaces for personal freedom that governments could not reach
  • But something changed for Thiel in 2009
  • The people, he concluded, could not be trusted with important decisions. “I no longer believe that freedom and democracy are compatible,” he wrote.
  • An even more notable one followed: “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
  • By 2015, six years after declaring his intent to change the world from the private sector, Thiel began having second thoughts. He cut off funding for the Seasteading Institute—years of talk had yielded no practical progress—and turned to other forms of escape
  • “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” he wrote. His manifesto has since become legendary in Silicon Valley, where his worldview is shared by other powerful men (and men hoping to be Peter Thiel).
  • Thiel’s investment in cryptocurrencies, like his founding vision at PayPal, aimed to foster a new kind of money “free from all government control and dilution.”
  • His decision to rescue Elon Musk’s struggling SpaceX in 2008—with a $20 million infusion that kept the company alive after three botched rocket launches—came with aspirations to promote space as an open frontier with “limitless possibility for escape from world politics.”
  • It was seasteading that became Thiel’s great philanthropic cause in the late aughts and early 2010s. The idea was to create autonomous microstates on platforms in international waters.
  • “There’s zero chance Peter Thiel would live on Sealand,” he said, noting that Thiel likes his comforts too much. (Thiel has mansions around the world and a private jet. Seal performed at his 2017 wedding, at the Belvedere Museum in Vienna.)
  • As he built his companies and grew rich, he began pouring money into political causes and candidates—libertarian groups such as the Endorse Liberty super PAC, in addition to a wide range of conservative Republicans, including Senators Orrin Hatch and Ted Cruz
  • Sam Altman, the former venture capitalist and now CEO of OpenAI, revealed in 2016 that in the event of global catastrophe, he and Thiel planned to wait it out in Thiel’s New Zealand hideaway.
  • When I asked Thiel about that scenario, he seemed embarrassed and deflected the question. He did not remember the arrangement as Altman did, he said. “Even framing it that way, though, makes it sound so ridiculous,” he told me. “If there is a real end of the world, there is no place to go.”
  • You’d have eco farming. You’d turn the deserts into arable land. There were sort of all these incredible things that people thought would happen in the ’50s and ’60s and they would sort of transform the world.”
  • None of that came to pass. Even science fiction turned hopeless—nowadays, you get nothing but dystopias
  • He hungered for advances in the world of atoms, not the world of bits.
  • Founders Fund, the venture-capital firm he established in 2005
  • The fund, therefore, would invest in smart people solving hard problems “that really have the potential to change the world.”
  • This was not what Thiel wanted to be doing with his time. Bodegas and dog food were making him money, apparently, but he had set out to invest in transformational technology that would advance the state of human civilization.
  • He told me that he no longer dwells on democracy’s flaws, because he believes we Americans don’t have one. “We are not a democracy; we’re a republic,” he said. “We’re not even a republic; we’re a constitutional republic.”
  • “It was harder than it looked,” Thiel said. “I’m not actually involved in enough companies that are growing a lot, that are taking our civilization to the next level.”
  • Founders Fund has holdings in artificial intelligence, biotech, space exploration, and other cutting-edge fields. What bothers Thiel is that his companies are not taking enough big swings at big problems, or that they are striking out.
  • In at least 20 hours of logged face-to-face meetings with Buma, Thiel reported on what he believed to be a Chinese effort to take over a large venture-capital firm, discussed Russian involvement in Silicon Valley, and suggested that Jeffrey Epstein—a man he had met several times—was an Israeli intelligence operative. (Thiel told me he thinks Epstein “was probably entangled with Israeli military intelligence” but was more involved with “the U.S. deep state.”)
  • Buma, according to a source who has seen his reports, once asked Thiel why some of the extremely rich seemed so open to contacts with foreign governments. “And he said that they’re bored,” this source said. “‘They’re bored.’ And I actually believe it. I think it’s that simple. I think they’re just bored billionaires.”
  • he has a sculpture that resembles a three-dimensional game board. Ascent: Above the Nation State Board Game Display Prototype is the New Zealander artist Simon Denny’s attempt to map Thiel’s ideological universe. The board features a landscape in the aesthetic of Dungeons & Dragons, thick with monsters and knights and castles. The monsters include an ogre labeled “Monetary Policy.” Near the center is a hero figure, recognizable as Thiel. He tilts against a lion and a dragon, holding a shield and longbow. The lion is labeled “Fair Elections.” The dragon is labeled “Democracy.” The Thiel figure is trying to kill them.
  • When I asked Thiel to explain his views on democracy, he dodged the question. “I always wonder whether people like you … use the word democracy when you like the results people have and use the word populism when you don’t like the results,” he told me. “If I’m characterized as more pro-populist than the elitist Atlantic is, then, in that sense, I’m more pro-democratic.”
  • “I couldn’t find them,” he said. “I couldn’t get enough of them to work.”
  • He said he has no wish to change the American form of government, and then amended himself: “Or, you know, I don’t think it’s realistic for it to be radically changed.” Which is not at all the same thing.
  • When I asked what he thinks of Yarvin’s autocratic agenda, Thiel offered objections that sounded not so much principled as practical.
  • “I don’t think it’s going to work. I think it will look like Xi in China or Putin in Russia,” Thiel said, meaning a malign dictatorship. “It ultimately I don’t think will even be accelerationist on the science and technology side, to say nothing of what it will do for individual rights, civil liberties, things of that sort.”
  • Still, Thiel considers Yarvin an “interesting and powerful” historian
  • he always talks about is the New Deal and FDR in the 1930s and 1940s,” Thiel said. “And the heterodox take is that it was sort of a light form of fascism in the United States.”
  • Yarvin, Thiel said, argues that “you should embrace this sort of light form of fascism, and we should have a president who’s like FDR again.”
  • Did Thiel agree with Yarvin’s vision of fascism as a desirable governing model? Again, he dodged the question.
  • “That’s not a realistic political program,” he said, refusing to be drawn any further.
  • Looking back on Trump’s years in office, Thiel walked a careful line.
  • A number of things were said and done that Thiel did not approve of. Mistakes were made. But Thiel was not going to refashion himself a Never Trumper in retrospect.
  • “I have to somehow give the exact right answer, where it’s like, ‘Yeah, I’m somewhat disenchanted,’” he told me. “But throwing him totally under the bus? That’s like, you know—I’ll get yelled at by Mr. Trump. And if I don’t throw him under the bus, that’s—but—somehow, I have to get the tone exactly right.”
  • Thiel knew, because he had read some of my previous work, that I think Trump’s gravest offense against the republic was his attempt to overthrow the election. I asked how he thought about it.
  • “Look, I don’t think the election was stolen,” he said. But then he tried to turn the discussion to past elections that might have been wrongly decided. Bush-Gore in 2000, for instance.
  • He came back to Trump’s attempt to prevent the transfer of power. “I’ll agree with you that it was not helpful,” he said.
  • there is another piece of the story, which Thiel reluctantly agreed to discuss
  • Puck reported that Democratic operatives had been digging for dirt on Thiel since before the 2022 midterm elections, conducting opposition research into his personal life with the express purpose of driving him out of politics.
  • Among other things, the operatives are said to have interviewed a young model named Jeff Thomas, who told them he was having an affair with Thiel, and encouraged Thomas to talk to Ryan Grim, a reporter for The Intercept. Grim did not publish a story during election season, as the opposition researchers hoped he would, but he wrote about Thiel’s affair in March, after Thomas died by suicide.
  • He deplored the dirt-digging operation, telling me in an email that “the nihilism afflicting American politics is even deeper than I knew.”
  • He also seemed bewildered by the passions he arouses on the left. “I don’t think they should hate me this much,”
  • he spoke at the closed-press event with a lot less nuance than he had in our interviews. His after-dinner remarks were full of easy applause lines and in-jokes mocking the left. Universities had become intellectual wastelands, obsessed with a meaningless quest for diversity, he told the crowd. The humanities writ large are “transparently ridiculous,” said the onetime philosophy major, and “there’s no real science going on” in the sciences, which have devolved into “the enforcement of very curious dogmas.”
  • “Diversity—it’s not enough to just hire the extras from the space-cantina scene in Star Wars,” he said, prompting laughter.
  • Nor did Thiel say what genuine diversity would mean. The quest for it, he said, is “very evil and it’s very silly.”
  • “the silliness is distracting us from very important things,” such as the threat to U.S. interests posed by the Chinese Communist Party.
  • “Whenever someone says ‘DEI,’” he exhorted the crowd, “just think ‘CCP.’”
  • Somebody asked, in the Q&A portion of the evening, whether Thiel thought the woke left was deliberately advancing Chinese Communist interests
  • “It’s always the difference between an agent and asset,” he said. “And an agent is someone who is working for the enemy in full mens rea. An asset is a useful idiot. So even if you ask the question ‘Is Bill Gates China’s top agent, or top asset, in the U.S.?’”—here the crowd started roaring—“does it really make a difference?”
  • About 10 years ago, Thiel told me, a fellow venture capitalist called to broach the question. Vinod Khosla, a co-founder of Sun Microsystems, had made the Giving Pledge a couple of years before. Would Thiel be willing to talk with Gates about doing the same?
  • Thiel feels that giving his billions away would be too much like admitting he had done something wrong to acquire them
  • He also lacked sympathy for the impulse to spread resources from the privileged to those in need. When I mentioned the terrible poverty and inequality around the world, he said, “I think there are enough people working on that.”
  • besides, a different cause moves him far more.
  • Should Thiel happen to die one day, best efforts notwithstanding, his arrangements with Alcor provide that a cryonics team will be standing by.
  • Then his body will be cooled to –196 degrees Celsius, the temperature of liquid nitrogen. After slipping into a double-walled, vacuum-insulated metal coffin, alongside (so far) 222 other corpsicles, “the patient is now protected from deterioration for theoretically thousands of years,” Alcor literature explains.
  • All that will be left for Thiel to do, entombed in this vault, is await the emergence of some future society that has the wherewithal and inclination to revive him. And then make his way in a world in which his skills and education and fabulous wealth may be worth nothing at all.
  • I wondered how much Thiel had thought through the implications for society of extreme longevity. The population would grow exponentially. Resources would not. Where would everyone live? What would they do for work? What would they eat and drink? Or—let’s face it—would a thousand-year life span be limited to men and women of extreme wealth?
  • “Well, I maybe self-serve,” he said, perhaps understating the point, “but I worry more about stagnation than about inequality.”
  • Thiel is not alone among his Silicon Valley peers in his obsession with immortality. Oracle’s Larry Ellison has described mortality as “incomprehensible.” Google’s Sergey Brin aspires to “cure death.” Dmitry Itskov, a leading tech entrepreneur in Russia, has said he hopes to live to 10,000.
  • “I should be investing way more money into this stuff,” he told me. “I should be spending way more time on this.”
  • You haven’t told your husband? Wouldn’t you want him to sign up alongside you? “I mean, I will think about that,” he said, sounding rattled. “I will think—I have not thought about that.”
  • No matter how fervent his desire, Thiel’s extraordinary resources still can’t buy him the kind of “super-duper medical treatments” that would let him slip the grasp of death. It is, perhaps, his ultimate disappointment.
  • “There are all these things I can’t do with my money,” Thiel said.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

The Sad Trombone Debate: The RNC Throws in the Towel and Gets Ready to Roll Over for Trump. Again. - 0 views

  • Death to the Internet
  • Yesterday Ben Thompson published a remarkable essay in which he more or less makes the case that the internet is a socially deleterious invention, that it will necessarily get more toxic, and that the best we can hope for is that it gets so bad, so fast, that everyone is shocked into turning away from it.
  • Ben writes the best and most insightful newsletter about technology and he has been, in all the years I’ve read him, a techno-optimist.
  • this is like if Russell Moore came out and said that, on the whole, Christianity turns out to be a bad thing. It’s that big of a deal.
  • Thompson’s case centers around constraints and supply, particularly as they apply to content creation.
  • In the pre-internet days, creating and distributing content was relatively expensive, which placed content publishers—be they newspapers, or TV stations, or movie studios—high on the value chain.
  • The internet reduced distribution costs to zero and this shifted value away from publishers and over to aggregators: Suddenly it was more important to aggregate an audience—a la Google and Facebook—than to be a content creator.
  • Hellscape
  • What has alarmed Thompson is that AI has now reduced the cost of creating content to zero.
  • what does the world look like when the cost of both creating and distributing content is zero?
  • We’re headed to a place where content is artificially created and distributed in such a way as to be tailored to a given user’s preferences. Which will be the equivalent of living in a hall of mirrors.
  • At the other end of the spectrum, independent journalists should be okay. A lone reporter running a focused Substack only needs four digits’ worth of subscribers to sustain them.
  • What does that mean for news? Nothing good.
  • It doesn’t really make sense to talk about “news media” because there are fundamental differences between publication models that are driven by scale.
  • So the challenges the New York Times face will be different than the challenges that NPR or your local paper face.
  • Two big takeaways:
  • (1) Ad-supported publications will not survive
  • Zero-cost content creation combined with zero-cost distribution means an infinite supply of content. The more content there is, the more ad space exists—and the lower ad prices go. (A toy version of this arithmetic is sketched after this list.)
  • Actually, some ad-supported publications will survive. They just won’t be news. What will survive will be content mills that exist to serve ads specifically matched to targeted audiences.
  • (2) Size is determinative.
  • The New York Times has a moat by dint of its size. It will see the utility of its soft “news” sections decline in value, because AI is going to be better at creating cooking and style content than breaking hard news. But still, the NYT will be okay because it has pivoted hard into being a subscription-based service over the last decade.
  • Audiences were valuable; content was commoditized.
  • But everything in between? That’s a crapshoot.
  • Technology writers sometimes talk about the contrast between “builders” and “conservers” — roughly speaking, between those who are most animated by what we stand to gain from technology and those animated by what we stand to lose.
  • in our moment the builder and conserver types are proving quite mercurial. On issues ranging from Big Tech to medicine, human enhancement to technologies of governance, the politics of technology are in upheaval.
  • Dispositions are supposed to be basically fixed. So who would have thought that deep blue cities that yesterday were hotbeds of vaccine skepticism would today become pioneers of vaccine passports? Or that outlets that yesterday reported on science and tech developments in reverent tones would today make it their mission to unmask “tech bros”?
  • One way to understand this churn is that the builder and the conserver types each speak to real, contrasting features within human nature. Another way is that these types each pick out real, contrasting features of technology. Focusing strictly on one set of features or the other eventually becomes unstable, forcing the other back into view.
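Thompson’s “infinite supply” point is, at bottom, a division problem. Here is a minimal toy model, with entirely invented numbers, of what happens to ad prices when a roughly fixed pool of advertiser spend chases an unbounded supply of pages:

```python
# Toy model: a roughly fixed advertiser budget spread over an ever-growing
# supply of ad impressions. Every number here is invented for illustration.
TOTAL_AD_SPEND = 1_000_000.0  # assumed fixed monthly ad budget, in dollars
AD_SLOTS_PER_PAGE = 5         # assumed ad slots per page view

for page_views in [10**6, 10**7, 10**8, 10**9]:
    impressions = page_views * AD_SLOTS_PER_PAGE
    cpm = TOTAL_AD_SPEND / impressions * 1000  # price per 1,000 impressions
    print(f"{page_views:>13,} page views -> effective CPM ${cpm:,.4f}")
```

However the assumed numbers are chosen, the trend is the same: each tenfold increase in content supply cuts the effective price per impression tenfold.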
Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post - 0 views

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tool campaigns that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, New York Mayor Eric Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers.
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI.
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

India takes strong pro-Israel stance under Modi in a departure from the past | India | The Guardian - 0 views

  • Just a few hours after Hamas launched its assault on Israel, India’s prime minister was among the first world leaders to respond. In a strongly worded statement, Narendra Modi condemned the “terrorist attacks” and said India “stands in solidarity with Israel at this difficult hour”.
  • It was not a sentiment restricted to the upper echelons of the Indian government. As Azad Essa, a journalist and author of Hostile Homelands: The New Alliance Between India and Israel, said: “This messaging gave a clear signal to the whole rightwing internet cell in India.”
  • In the aftermath, the Indian internet factcheckers AltNews and Boom began to observe a flood of disinformation targeting Palestine pushed out by Indian social media accounts. It included fake stories about atrocities committed by Palestinians and Hamas that were sometimes shared millions of times, often using the conflict to push the same Islamophobic narrative that has been regularly used to demonise India’s Muslim population since the BJP came to power.
  • ...7 more annotations...
  • BJP-associated Facebook groups also began to push the message that Hamas represented the same Muslim threat facing India in the troubled, majority-Muslim region of Kashmir, and Palestinians were sweepingly branded as jihadis.
  • A turning point came in 1999, when India went to war with Pakistan and Israel proved willing to provide arms and ammunition. It was the beginning of a defence relationship that has grown exponentially. India buys about $2bn worth of arms from Israel every year, making Israel its largest arms supplier after Russia, and India accounts for 46% of Israel’s overall weapons exports.
  • It was the election of Modi that marked a fundamental sea change. While previous governments had kept their dealings with Israel largely quiet, for fear of alienating foreign allies and India’s own vast Muslim population, Modi’s Hindu nationalist BJP government had very different priorities.
  • Essa said: “The narrative they were pushing was clear: that India and Israel are these ancient civilisations that had been derailed by outsiders – which means Muslims – and their leaders have come together, like long-lost brothers, to fulfil their destiny.”
  • The ideological alignment between the two leaders was certainly more apparent than in the past. The BJP’s ideological forefathers, and its rank and file today, have long regarded Israel as a model for the religious nationalist state, referred to as the Hindu Rashtra, that the Hindu rightwing in India hope to establish.
  • While Modi was also the first Indian prime minister to visit Ramallah in Palestine, much of the focus of his government has been on strengthening ties with Israel, whether through defence, culture, agriculture or even film-making. This year, Gautam Adani, the Indian billionaire businessman seen as close to Modi, paid $1.2bn to acquire the strategic Israeli port of Haifa.
  • Modi’s foreign policy has also overseen a transformation in ties with Arab Gulf countries including Saudi Arabia, the United Arab Emirates and Qatar. That shift has been of great financial benefit to India and laid the foundation for a groundbreaking India–Middle East economic trade corridor, running all the way to Europe, which was announced at this year’s G20 summit but has yet to be built.
Javier E

Luiz Barroso, Who Supercharged Google's Reach, Dies at 59 - The New York Times - 0 views

  • When Google arrived in the late 1990s, hundreds of thousands of people were instantly captivated by its knack for taking them wherever they wanted to go on the internet. Developed by the company’s founders Larry Page and Sergey Brin, the algorithm that drove the site seemed to work like magic.
  • As the internet search engine expanded its reach to billions of people over the next decade, it was driven by another major technological advance that was less discussed, though no less important: the redesign of Google’s giant computer data centers.
  • Led by a Brazilian named Luiz Barroso, a small team of engineers rebuilt the warehouse-size centers so that they behaved like a single machine — a technological shift that would change the way the entire internet was built, allowing any site to reach billions of people almost instantly and much more consistently.
  • ...8 more annotations...
  • Before the rise of Google, internet companies stuffed their data centers with increasingly powerful and expensive computer servers, as they struggled to reach more and more people. Each server delivered the website to a relatively small group of people. And if the server died, those people were out of luck.
  • Dr. Barroso realized that the best way to distribute a wildly popular website like Google was to break it into tiny pieces and spread them evenly across an array of servers. Rather than each server delivering the site to a small group of people, the entire data center delivered the site to its entire audience. (A minimal code sketch of this idea follows these notes.)
  • “In other words, we must treat the data center itself as one massive warehouse-scale computer.”
  • Widespread outages became a rarity, especially as Dr. Barroso and his team expanded these ideas across multiple data centers. Eventually, Google’s entire global network of data centers behaved as a single machine.
  • By the mid-1990s, he was working as a researcher in a San Francisco Bay Area lab operated by the Digital Equipment Corporation, one of the computer giants of the day.
  • There, he helped create multi-core computer chips — microprocessors that combine several processing cores working in tandem. A more efficient way of running computer software, such chips are now a vital part of almost any new computer.
  • At first, Dr. Barroso worked on software. But as Dr. Hölzle realized that Google would also need to build its own hardware, he tapped Dr. Barroso to lead the effort. Over the next decade, as it pursued his warehouse-size computer, Google built its own servers, data storage equipment and networking hardware.
  • For years, this work was among Google’s most closely guarded secrets. The company saw it as a competitive advantage. But by the 2010s, companies like Amazon and Facebook were following the example set by Dr. Barroso and his team. Soon, the world’s leading computer makers were building and selling the same kind of low-cost hardware, allowing any internet company to build an online empire the way Google had.
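  To make the sharding idea above concrete, here is a minimal, hypothetical sketch of spreading work evenly across an array of servers using consistent hashing, the general technique behind such designs. It is not Google’s actual code; the names (ShardedCluster, server_for) are invented for illustration.

```python
import hashlib

class ShardedCluster:
    """Toy model of 'the data center as one computer': work is split
    into many small shards spread evenly across servers."""

    def __init__(self, servers, shards_per_server=100):
        # Many virtual shards per server even out the load and make it
        # cheap to redistribute work when a machine dies.
        self.ring = {}
        for server in servers:
            for i in range(shards_per_server):
                self.ring[self._hash(f"{server}:{i}")] = server
        self.sorted_keys = sorted(self.ring)

    @staticmethod
    def _hash(value):
        # A stable hash, so every machine agrees on shard placement.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, item):
        # Walk clockwise around the hash ring to the owning shard.
        h = self._hash(item)
        for key in self.sorted_keys:
            if key >= h:
                return self.ring[key]
        return self.ring[self.sorted_keys[0]]  # wrap around

cluster = ShardedCluster([f"server-{n:04d}" for n in range(1000)])
print(cluster.server_for("query: warehouse-scale computing"))
```

  Because any single query or shard maps to one of thousands of machines, no individual server is indispensable, which is what lets the whole warehouse behave like one computer.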
Javier E

Fun is dead. - The Washington Post - 0 views

  • Sometime in recent history, possibly around 2004, Americans forgot to have fun, true fun, as though they’d misplaced it like a sock.
  • Instead, fun evolved into work, sometimes more than true work, which is where we find ourselves now.
  • Fun is often emphatic, exhausting, scheduled, pigeonholed, hyped, forced and performative.
  • ...25 more annotations...
  • Things that were long big fun now overwhelm, exhaust and annoy. The holiday season is an extended exercise in excess and loud, often sleazy sweaters.
  • Which means it is nothing of the sort. This is the drag equivalent of fun and suggests that fun is done.
  • Adults assiduously record themselves appearing to have something masquerading as “fun,” a fusillade of Coachellic micro social aggressions unleashed on multiple social media platforms. Look at me having so much FUN!
  • Vacations are overscheduled with too many activities, FOMO on steroids, a paradox of choice-inducing decision fatigue, so much so that people return home exhausted and in need of another one.
  • Weddings have morphed into multistage stress extravaganzas while doubling as express paths to insolvency: destination proposals for the whole family, destination bachelorette and bachelor blowouts, destination weddings in remote barns with limited lodging, something called a “buddymoon” (bring the gang!) and planners to help facilitate the custom-cocktail sameness of it all.
  • What could be a greater cause for joy or more natural than having a baby? Apparently, not much these days. Impending parenthood is overthought and over-apped, incorporating more savings-draining events that didn’t exist a few decades ago: babymoons and lethal, fire-inducing gender-reveal gatherings and baby showers so over-the-top as to shame weddings.
  • Retirements must be purposeful. Also, occasions for an acute identity crisis. You need to have a plan, a mission, a coach, a packed color-coded grid of daily activities in a culture where our jobs are our identities, our worth tied to employment.
  • “I feel like I should be having more fun than I’m actually having,” says Alyssa Alvarez, a social media marketing manager and DJ in Detroit, expressing a sentiment that many share. “There are expectations of what I want people to believe that my life is like rather than what my life is actually like.”
  • “The world is so much less about human connection,” says Amanda Richards, 34, who works in casting in Los Angeles and is a graduate of Cudworth’s course. “We do more things virtually. People are more isolated. And there’s all this toxic positivity to convince people of how happy you are.”
  • For eons, early adulthood was considered an age of peak fun. Now, according to several studies, it’s a protracted state of anxiety and depression.
  • Because there is now a coach for everything, Alvarez hired the “party coach” Evan Cudworth, taking his $497 course this fall on how to pursue “intentional fun.” (It now costs $555.)
  • Blame it on an American culture that values work, productivity, power, wealth, status and more work over leisure.
  • Blame it on technological advances that tether us to work without cessation.
  • Blame it on the pandemic, which exacerbated so much while delivering Zoomageddon.
  • Blame it on 2004, with the advent of Facebook, which led to Twitter (okay, X), Instagram, Threads, TikTok and who-knows-what lurking in the ether.
  • Blame it again on 2004 and the introduction of FOMO, our dread of missing out, broadcast through multiple social media spigots.
  • “So many people are retreating into their phones, into anxiety,” says Cudworth, 37, from Chicago. “I’m helping people rediscover what fun means to them.”
  • His mandate is redefining fun: cutting back on bingeing screen time, eradicating envy scrolling, getting outside, moving, dancing. “With technology, we don’t allow ourselves to be present. You’re always thinking ‘something is better around the corner,’” Cudworth says, the now squandered in pursuit of the future.
  • Instead of this being the most wonderful time of the year, we battle holiday fatigue, relentless beseeching for our money and, if Fox News is to be believed, a war on Christmas that is nearing its third decade.
  • How do Americans spend their leisure hours when they might be having fun with others, making those vital in-person connections? Watching television, our favorite free-time and “sports” activity (yes, that’s how it’s classified), according to the Bureau of Labor Statistics: an average of 2.8 hours daily.
  • “That’s way more television than you really need. We put play on the back burner,” says Pat Rumbaugh, 65, of Takoma Park, Md. She’s “The Play Lady,” who organizes unorganized play for adults.
  • Catherine Price, the author of “The Power of Fun: How to Feel Alive Again,” believes “we’re totally misdoing leisure” and “not leaving any room for spontaneity.”
  • To Price, True Fun is the confluence of connection (other people, nature), playfulness (lightheartedness, freedom) and flow (being fully engaged, present), which is not as challenging as it sounds. “You can have fun in any context. Playfulness is about an attitude.”
  • Back in the day, co-workers were friends. (Sometimes, more.) After hours, they gathered for drinks, played softball. Today, because of email, Slack and remote work, offices are half empty and far quieter than libraries.
  • “We go to work and there’s no sense of connection and camaraderie,” says Davis, who was long employed by his city’s department of parks and recreation. “People feel emotionally disconnected. Healthy conversations are the precursor of fun. We’ve lost the art of communication. Our spirit comes home with us. If you don’t communicate at work, what are you coming home with?”
lilyrashkind

Faith leaders lead community in grieving after Uvalde shooting - 0 views

  • On Tuesday, a gunman entered the elementary school and killed 21 people — 19 of them students — in Uvalde, Texas. Two weeks before in Buffalo, a gunman shot and killed 10 people — most of whom were Black — in a racist massacre.
  • “It’s very hard for people to even talk about their grief right now,” said Thomson. “When we don’t know what to do, we come together as a community.”
  • The Rev. Mark Tyler of Mother Bethel A.M.E. Church shared with his congregation on Sunday that people are “getting sick” of watching people continue to die in mass shootings while nothing is done to change the status quo. According to Tyler, healing is found when feelings are shared and heard.
  • ...3 more annotations...
  • “A grieving process allows us to heal. When we deny that process, that’s when the numbness sets in. Then beyond that we start feeling symptoms of anxiety, and beyond that — depression,” said Whaley-Perkins. “So it’s really important for people who are vulnerable or have previous traumas that you don’t wait to see if it’s going to go away. Healing is extraordinarily important.” According to Whaley-Perkins, a community should be a group of people that provide safety, can be trusted, and where one can be vulnerable with their feelings. For many in Philadelphia, where they practice their faith is also where community resides.
  • “Unless we change fundamentally how we educate our society, unfortunately people will still find a way to do these things,” said Shemtov. “We are all different — but we were all created by God with a purpose. Everybody has to start where they can start. If you’re not in the position to make national or local change, we can all change how we treat ou
  • As the country reckons with how to move forward, interfaith leaders in Philadelphia look to balance healing with collective action. To Chad Dion Lassiter, who is a national race relations expert and executive director of Pennsylvania Human Relations Commission, taking care of oneself, of one’s community, and finding the motivation to take action are made possible by taking the healing process seriously.