
Javier E

AI is already writing books, websites and online recipes - The Washington Post

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.”
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait.
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing. “My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”
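The clickbait economics Levin describes above reduces to simple break-even arithmetic: article cost divided by ad revenue per view. A minimal sketch, where only the $250 human-written and $10 AI-written costs come from the article and the revenue-per-view figure is an assumption for illustration ("fractions of a cent" per visit):

```python
# Break-even sketch for the clickbait cost-vs-revenue equation described above.
# The $250 (human) and $10 (AI) article costs are Levin's figures; the ad
# revenue per pageview is an assumed illustrative value.

def breakeven_pageviews(cost_per_article: float, revenue_per_view: float) -> float:
    """Pageviews needed before an article covers its production cost."""
    return cost_per_article / revenue_per_view

RPV = 0.005  # assumed ad revenue per pageview (half a cent)

human_cost, ai_cost = 250.0, 10.0  # Levin's figures for a five-grill review

print(breakeven_pageviews(human_cost, RPV))  # 50,000 views to break even
print(breakeven_pageviews(ai_cost, RPV))     # 2,000 views to break even
```

Under these assumptions the AI-written article pays for itself at one twenty-fifth the traffic, which is the incentive driving the volume strategy the article describes.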
Javier E

Chartbook #165: Polycrisis - thinking on the tightrope.

  • in April 2022 the Cascade Institute published an interesting report on the theme by Scott Janzwood and Thomas Homer-Dixon. They defined a polycrisis as follows:
  • We define a global polycrisis as any combination of three or more interacting systemic risks with the potential to cause a cascading, runaway failure of Earth’s natural and social systems that irreversibly and catastrophically degrades humanity’s prospects.
  • A global polycrisis, should it occur, will inherit the four core properties of systemic risks—extreme complexity, high nonlinearity, transboundary causality, and deep uncertainty—while also exhibiting causal synchronization among risks.
  • A systemic risk is a threat emerging within one natural, technological, or social system with impacts extending beyond that system to endanger the functionality of one or more other systems
  • “Polycrisis is a way of capturing the tangled mix of challenges and changes that closely interact with one another, bending, blurring and amplifying each other.”
  • The FT essay was a short piece - originally drafted to run to only 750 words. In that short compass I focused on three aspects:
  • (1) Defining the concept of polycrisis in simple and intuitive terms;
  • (2) Stressing the diversity of causal factors implied by the term “poly”;
  • (3) and emphasizing the novelty of our current situation.
  • There are two aspects to the novelty that I stress in the FT piece: first, our inability to understand our current situation as the result of a single, specific causal factor; and second, the extraordinary scale and breadth of global development, especially in the last 50 years, which makes it seem probable, according to the cognitive schemata and models that we do have at our disposal, that we are about to crash through critical tipping points.
  • Do we actually know what development or growth are?
  • As Bruno Latour forced us to recognize, it is not at all obvious that we do understand our own situation. In fact, as he convincingly argued in We Have Never Been Modern, modernity’s account of itself is built around blindspots specifically with regard to the hybrid mobilization of material resources and actors and the working of science itself, which define the grand developmental narrative.
  • We have every reason to think that we are at a dramatic threshold point, but also that our need to reach for a term as unspecific as polycrisis indicates our flailing inability to grasp our situation with the confidence and conceptual clarity that we might once have hoped for.
  • What Beck taught us was that risk is no longer in any simple sense “natural” but a phenomenon of second nature.
  • A Beckian reading of polycrisis might look a bit like the version produced by Christopher Hobson and Matthew Davies, summarized as follows:
  • A polycrisis can be thought of as having the following properties: (1) Multiple, separate crises happening simultaneously. This is the most immediate and comprehensible feature.
  • (2) Feedback loops, in which individual crises interact in both foreseeable and unexpected ways. This points to the ways that these separate crises relate to each other.
  • (3) Amplification, whereby these interactions cause crises to magnify or accelerate, generating a sense of lack of control. The way these separate problems relate and connect works to exacerbate and deepen the different crises.
  • (4) Unboundedness, in which each crisis ceases to be clearly demarcated, both in time and space, as different problems bleed over and merge. It becomes increasingly difficult to distinguish where one issue ends, and another commences.
  • (5) Layering, a dynamic Tooze attributes to Yixin’s analysis, whereby the concerns of interest groups related to each distinct crisis overlap ‘to create layered social problems: current problems with historical problems, tangible interest problems with ideological problems, political problems with non-political problems; all intersecting and interfering with one another’ (quoted in Tooze 2021, 18).
  • (6) The breakdown of shared meaning, stemming from crises being understood differently and from the complex ways in which they interact, and how these interactions are subsequently perceived differently. As each crisis blurs and connects to the other, it becomes more difficult to identify a clear scope and narrative for each distinct crisis, as well as coming to terms with all the interactions between different issues.
  • (7) Cross purposes, whereby each individual crisis might impede the resolution of another crisis, in terms of demanding attention and resources, and the extent to which they have become tangled together makes it difficult to distinguish and prioritise.
  • (8) Emergent properties, the collection of these dynamics, which all exhibit a high degree of reflexivity, exceeds the sum total of its parts. The polycrisis is ultimately much more than a collection of smaller, separate crises. Instead, it is something like a socio-political version of the ‘Fujiwhara effect,’ a term used to describe when two or more cyclones come together, morph and merge.
  • We need to think “big”. Or rather we need to learn how to span the void between the very big and the very particular, the micro and the macro
  • What all this talk of grand social processes and movements of the mind should not obscure is the extent to which the current crisis is also a matter of identity, choice and action. As much as it is a matter of sociology, social theory and grand historical sweep, it is also a matter of psychology, both at the group and very intimate level, and of politics.
  • The issue of politics must however be flagged.
  • The polycrisis affects us at every level. And if you want to take seriously the problem of thinking in medias res you cannot bracket the matter of psychology.
  • The tension of the current moment is not, after all, simply the result of long-term processes of development, or environmental change. It is massively exacerbated by geopolitical tension resulting from strategic decisions taken by state elites. Some of those are elected. Some not.
  • What is characteristic of the current moment, and symptomatic of the polycrisis, is that the decisive actors in Russia, China and the United States, the three greatest military powers, are all defining their positions as though their very identities were on the line.
  • Can one really say that the Biden administration, the Chinese, Putin’s regime are crisis-fighting? Are they not escalating?
  • It is surely a matter of both, and in interdependence. Each of the major powers will insist that they are acting defensively (crisis-fighting in the extended sense). But what this entails, if you feel fundamental interests are at stake, is escalation, even to the point of engaging in open warfare or risking atomic confrontation.
  • It is like the classic Cold War but only worse, because everyone feels under truly existential pressure and has a sense of the clock ticking. If no one confidently believes that they have time on their side - and who has that luxury in the age of polycrisis? - it makes for a very dangerous situation indeed.
  • I found the idea of polycrisis interesting and timely because the prefix “poly” directed attention to the diversity of challenges without specifying a single dominant contradiction or source of tension or dysfunction.
Javier E

Ultra-Orthodox Israelis Are Joining the Army - WSJ

  • Soon after the May 1948 birth of the state of Israel, a meeting took place between David Ben-Gurion, Israel’s first prime minister, and Rabbi Avraham Yeshayahu Karelitz, a leading religious figure and head of Israel’s ultra-Orthodox (in Hebrew, Haredi) community. The result was the Status Quo Agreement, which charted two parallel lines: one for Jewish Israelis at large, whether secular or religious, the other tailored to the needs of Haredi Jews in particular.
  • Over the decades, the former “line” helped Jewish Israelis flourish in a modern state. The Haredi line restored the fortunes of a special religious world that, after being nearly destroyed in the Holocaust, was re-established and built up with institutions such as Torah academies, synagogues and Hasidic courts; with various subsects and religious activities; and with whole Haredi municipalities.
  • According to their political leaders, most Haredim hope to sustain their religiously devout and socially reclusive lives permanently under the protection of their longstanding civic exemptions. The rest of Israel demands and expects full participation.
  • given the growth of Haredi society, from 3% of Israel’s population in 1948 to almost 14% today, profound challenges have arisen. Part of Israel’s recent social unrest is the product of tension between the Haredi and non-Haredi public over the military draft
  • Within two weeks, some 3,000 Haredi men had asked to join Israel’s armed forces.
  • In light of these developments, it is tempting to imagine that Israel has turned a corner and things will never be the same. People made similar predictions during the pandemic, and most of them weren’t realized. We need to ensure that this time, things won’t simply bounce back to where they were.
  • Israel’s calamity has sparked several awakenings. It’s obvious now that Prime Minister Benjamin Netanyahu, however talented he may be, isn’t the Jewish messiah
  • We have seen the face of the true enemy and reabsorbed the ancient lesson that there is no negotiating with evil. It must be destroyed.
  • We have discovered that the international left—at least when it comes to Israel—will largely support its favored “underdog” along with its unquenchable thirst for Jewish blood.
Javier E

How OnlyFans top earner Bryce Adams makes millions selling a sex fantasy - Washington Post

  • In the American creator economy, no platform is quite as direct or effective as OnlyFans. Since launching in 2016, the subscription site known primarily for its explicit videos has become one of the most methodical, cash-rich and least known layers of the online-influencer industry, touching every social platform and, for some creators, unlocking a once-unimaginable level of wealth.
  • More than 3 million creators now post around the world on OnlyFans, which has 230 million subscribing “fans” — a global audience two-thirds the size of the United States itself
  • fans’ total payouts to creators soared last year to $5.5 billion — more than every online influencer in the United States earned from advertisers that year,
  • If OnlyFans’s creator earnings were taken as a whole, the company would rank around No. 90 on Forbes’s list of the biggest private companies in America by revenue, ahead of Twitter (now called X), Neiman Marcus Group, New Balance, Hard Rock International and Hallmark Cards.
  • Many creators now operate like independent media companies, with support staffs, growth strategies and promotional budgets, and work to apply the cold quantification and data analytics of online marketing to the creation of a fantasy life.
  • The subscription site has often been laughed off as a tabloid punchline, a bawdy corner of the internet where young, underpaid women (teachers, nurses, cops) sell nude photos, get found out and lose their jobs.
  • pressures to perform for a global audience; an internet that never forgets. “There is simply no room for naivety,” one said in a guide posted to Reddit’s r/CreatorsAdvice.
  • America’s social media giants for years have held up online virality as the ultimate goal, doling out measurements of followers, reactions and hearts with an unspoken promise: that internet love can translate into sponsorships and endorsement deals
  • But OnlyFans represents the creator economy at its most blatantly transactional — a place where viewers pay upfront for creators’ labor, and intimacy is just another unit of content to monetize.
  • The fast ascent of OnlyFans further spotlights how the internet has helped foster a new style of modern gig work that creators see as safe, remote and self-directed,
  • Creators’ nonchalance about the digital sex trade has fueled a broader debate about whether the site’s promotion of feminist autonomy is a facade: just a new class of techno-capitalism, selling the same patriarchal dream.
  • But OnlyFans increasingly has become the model for how a new generation of online creators gets paid. Influencers popular on mainstream sites use it to capitalize on the audiences they’ve spent years building. And OnlyFans creators have turned going viral on the big social networks into a marketing strategy, using Facebook, Twitter and TikTok as sales funnels for getting new viewers to subscribe.
  • many creators, she added, still find it uniquely alluring — a rational choice in an often-irrational environment for gender, work and power. “Why would I spend my day doing dirty, degrading, minimum-wage labor when I can do something that brings more money in and that I have a lot more control over?”
  • it is targeting major “growth regions” in Latin America, Europe and Australia. (The Mexican diver Diego Balleza said he is using his $15-a-month account to save up for next year’s Paris Olympics.)
  • “Does an accountant always enjoy their work? No. All work has pleasure and pain, and a lot of it is boring and annoying. Does that mean they’re being exploited?”
  • Adams’s operation is registered in state business records as a limited liability company and offers quarterly employee performance reviews and catered lunch. It also runs with factory-like efficiency, thanks largely to a system designed in-house to track millions of data points on customers and content and ensure every video is rigorously planned and optimized.
  • Since sending her first photo in 2021, Adams’s OnlyFans accounts have earned $16.5 million in sales, more than 1.4 million fans and more than 11 million “likes.” She now makes about $30,000 a day — more than most American small businesses — from subscriptions, video sales, messages and tips, half of which is pure profit
  • Adams’s team sees its business as one of harmless, destigmatized gratification, in which both sides get what they want. The buyers are swiped over in dating apps, widowed, divorced or bored, eager to pay for the illusion of intimacy with an otherwise unattainable match. And the sellers see themselves as not all that different from the influencers they watched growing up on YouTube, charging for parts of their lives they’d otherwise share for free.
  • “This is normal for my generation, you know?
  • “I can go on TikTok right now and see ten girls wearing the bare minimum of clothing just to get people to join their page. Why not go the extra step to make money off it?”
  • the job can be financially precarious and mentally taxing, demanding not just the technical labor of recording, editing, managing and marketing but also the physical and emotional labor of adopting a persona to keep clients feeling special and eager to spend.
  • In corporate filings in the U.K., where OnlyFans’s parent company, Fenix International Limited, is based, the company said its sales grew from $238 million in 2019 to more than $5.5 billion last year.
  • Its international army of creators has also grown from 348,000 in 2019 to more than 3 million today — a tenfold increase.
  • The company paid its owner, the Ukrainian American venture capitalist Leonid Radvinsky, $338 million in dividends last year.)
  • The United States accounts for a sizable portion of its creator base and 70 percent of its annual revenue
  • When Tim Stokely, a London-based operator of live-cam sex sites, founded OnlyFans with his brother in 2016, he framed it as a simple way to monetize the creators who were becoming the world’s new celebrities — the same online influencers, just with a payment button. In 2019, Stokely told Wired magazine that his site was like “a bolt-on to your existing social media,” in the same way “Uber is a bolt-on to your car.”
  • Before OnlyFans, pornography on the internet had been largely a top-down enterprise, with agents, producers, studios and other middlemen hoarding the profits of performers’ work. OnlyFans democratized that business model, letting the workers run the show: recording their own content, deciding their prices, selling it however they’d like and reaping the full reward.
  • The platform bans real-world prostitution, as well as extreme or illegal content, and requires everyone who shows up on camera to verify they’re 18 or older by sending in a video selfie showing them holding a government-issued ID.
  • OnlyFans operates as a neutral marketplace, with no ads, trending topics or recommendation algorithms, placing few limitations on what creators can sell but also making it necessary for them to market themselves or fade away.
  • After sending other creators’ agents their money over PayPal, Adams’s ad workers send suggestions over the messaging app Telegram on how Bryce should be marketed, depending on the clientele. OnlyFans models whose fans tend to prefer the “girlfriend experience,” for instance, are told to talk up her authenticity: “Bryce is a real, fit girl who wants to get to know you
  • Like most platforms, OnlyFans suffers from a problem of incredible pay inequality, with the bulk of the profits concentrated in the bank accounts of the lucky few.
  • the top 1 percent of accounts made 33 percent of the money, and that most accounts took home less than $145 a month
  • Watching their partner have sex with someone else sometimes sparked what they called “classic little jealousy issues,” which Adams said they resolved with “more communication, more growing up.” The money was just too good. And over time, they adopted a self-affirming ideology that framed everything as just business. Things that were tough to do but got easier with practice, like shooting a sex scene, they called, in gym terms, “reps.” Things one may not want to do at first, but require some mental work to approach, became “self-limiting beliefs.”
  • They started hiring workers through friends and family, and what was once just Adams became a team effort, in which everyone was expected to workshop caption and video ideas. The group evaluated content under what Brian, who is 31, called a “triangulation method” that factored their comfort level with a piece of content alongside its engagement potential and “brand match.” Bryce the person gave way to Bryce the brand, a commercialized persona drafted by committee and refined for maximum marketability.
  • One of the operation’s most subtly critical components is a piece of software known as “the Tool,” which they developed and maintain in-house. The Tool scrapes and compiles every “like” and view on all of Adams’s social network accounts, every OnlyFans “fan action” and transaction, and every text, sext and chat message — more than 20 million lines of text so far.
  • It houses reams of customer data and a library of preset messages that Adams and her chatters can send to fans, helping to automate their reactions and flirtations — “an 80 percent template for a personalized response,” she said.
  • And it’s linked to a searchable database, in which hundreds of sex scenes are described in detail — by price, total sales, participants and general theme — and given a unique “stock keeping unit,” or SKU, much like the scannable codes on a grocery store shelf. If a fan says they like a certain sexual scenario, a team member can instantly surface any relevant scenes for an easy upsell. “Classic inventory chain,” Adams said.
  • The systemized database is especially handy for the young women of Adams’s chat team, known as the “girlfriends,” who work at a bench of laptops in the gym’s upper loft. The Tool helped “supercharge her messaging, which ended up, like, 3X-ing her output,” Brian said, meaning it tripled.
  • Keeping men talking is especially important because the chat window is where Adams’s team sends out their mass-message sales promotions, and the girlfriends never really know what to expect. One girlfriend said she’s had as many as four different sexting sessions going at once.
  • Adams employs a small team that helps her pay other OnlyFans creators to give away codes fans can use for free short-term trials. The team tracks redemption rates and promotional effectiveness in a voluminous spreadsheet, looking for guys who double up on discount codes, known as “stackers,” as well as bad bets and outright fraud.
  • Many OnlyFans creators don’t offer anything explicit, and the site has pushed to spotlight its stable of chefs, comedians and mountain bikers on a streaming channel, OFTV. But erotic content on the platform is inescapable; even some outwardly conventional creators shed their clothes behind the paywall
  • Creators with a more hardcore fan base, meanwhile, are told to cut to the chase: “300+ sex tapes & counting”; “Bryce doesn’t say no, she’s the most wild, authentic girl you will ever find.”
  • The $18 an hour she makes on the ad team, however, is increasingly dwarfed by the money Leigh makes from her personal OnlyFans account, where she sells sex scenes with her boyfriend for $10 a month. Leigh made $92,000 in gross sales in July, thanks largely to revenue from new fans who found her through Adams or the bikini videos Leigh posts to her 170,000-follower TikTok account
  • “This is a real job. You dedicate your time to it every single day. You’re always learning, you’re always doing new things,” she said. “I’d never thought I’d be good at business, but learning all these business tactics really empowers you. I have my own LLC; I don’t know any other 20-year-old right now that has their own LLC.”
  • The team is meeting all traffic goals, per their internal dashboard, which showed that through the day on a recent Thursday they’d gained 2,221,835 video plays, 19,707 landing-page clicks, 6,372 new OnlyFans subscribers and 9,024 new social-network followers. And to keep in shape, Adams and her boyfriend are abiding by a rigorous daily diet and workout plan
  • They eat the same Chick-fil-A salad at every lunch, track every calorie and pay a gym assistant to record data on every rep and weight of their exercise.
  • But the OnlyFans business is competitive, and it does not always feel to the couple like they’ve done enough. Their new personal challenge, they said, is to go viral on the other platforms as often as possible, largely through jokey TikTok clips and bikini videos that don’t give away too much.
  • the host told creators this sales-funnel technique was key to helping build the “cult of you”: “Someone’s fascination will become infatuation, which will make you a lot of money.”
  • Adams’s company has worked to reverse engineer the often-inscrutable art of virality, and Brian now estimates Adams makes about $5,000 in revenue for every million short-form video views she gets on TikTok.
  • Her team has begun ranking each platform by the amount of money they expect they can get from each viewer there, a metric they call “fan lifetime value.” (Subscribers who click through to her from Facebook tend to spend the most, the data show. Facebook declined to comment.)
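The "fan lifetime value" ranking described above boils down to expected revenue per subscriber acquired from each platform. A minimal sketch of that calculation — the platform names are real, but every subscriber count and dollar figure here is invented for illustration:

```python
# Hypothetical "fan lifetime value" ranking. All numbers are made up;
# only the metric's shape (revenue per acquired subscriber) comes from the article.
platforms = {
    "facebook":  {"subscribers": 800,   "revenue": 24_000.0},
    "tiktok":    {"subscribers": 4_500, "revenue": 67_500.0},
    "instagram": {"subscribers": 2_100, "revenue": 42_000.0},
}

def fan_lifetime_value(stats):
    # Expected revenue per subscriber who arrived from a given platform.
    return stats["revenue"] / stats["subscribers"]

# Rank platforms from highest to lowest expected value per viewer.
ranking = sorted(platforms,
                 key=lambda p: fan_lifetime_value(platforms[p]),
                 reverse=True)
print(ranking)  # with these invented numbers: ['facebook', 'instagram', 'tiktok']
```

With these invented figures, Facebook tops the ranking — consistent with the article's note that Facebook-sourced subscribers tend to spend the most.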
  • The younger workers said they see the couple as mentors, and the two are constantly reminding them that the job of a creator is not a “lottery ticket” and requires a persistent grind. Whenever one complains about their lack of engagement, Brian said he responds, “When’s the last time you posted 60 different videos, 60 days in a row, on your Instagram Reels?”
  • But some have taken to it quite naturally. Rayna Rose, 19, was working last year at a hair salon, sweeping floors for $12 an hour, when an old high school classmate who worked with Adams asked whether she wanted to try OnlyFans and make $500 a video.
  • Rose started making videos and working as a chatter for $18 an hour but recently renegotiated her contract with Adams to focus more on her personal OnlyFans account, where she has nearly 30,000 fans, many of whom pay $10 a month.
  • One recent evening this summer, Adams was in the farm’s gym when her boyfriend told her he was headed to their guest room to record a collab with Rose, who was wearing a blue bikini top and braided pigtails.
  • “Go have fun,” Adams told them as they walked away. “Make good content.” The 15-minute video has so far sold more than 1,400 copies and accounted for more than $30,000 in sales.
  • Rose said she has lost friends due to her “lifestyle,” with one messaging her recently, “Can you imagine how successful you would be if you studied regularly and spent your time wisely?”
  • The message stung but, in Rose’s eyes, they didn’t understand her at all. She feels, for the first time, like she has a sense of purpose: She wants to be a full-time influencer. She expects to clear $200,000 in earnings this year and is now planning to move out of her parents’ house.
  • “I had no idea what I wanted to do with my life. And now I know,” she said. “I want to be big. I want to be, like, mainstream.”
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
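Millière's "lazy network" point — memorize first, learn the rule only when memorization stops paying off — can be caricatured in a few lines. This is a hypothetical toy, not the study's actual transformer: a pure lookup table is perfect on its training set but useless beyond it, while the learned rule generalizes.

```python
# Toy contrast between memorization and rule-learning (illustrative only).
# A tiny "training set" of addition problems, as a lookup table.
train = {(a, b): a + b for a in range(3) for b in range(3)}

def memorizer(a, b):
    # Pure memorization: answers only problems seen in training,
    # returning None for anything unseen.
    return train.get((a, b))

def learned_rule(a, b):
    # What the model converges to once memorization stops improving
    # its predictions: the actual rule of addition.
    return a + b

print(memorizer(2, 2), learned_rule(2, 2))    # both correct on seen data
print(memorizer(40, 2), learned_rule(40, 2))  # only the learned rule generalizes
```

The pivot in the real model is of course gradual and driven by gradient descent, but the payoff structure is the same: past a certain point, only the rule keeps reducing prediction error.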
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Opinion | How AI is transforming education at the University of Mississippi - The Washi... - 0 views

  • Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.
  • Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.
  • What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.
  • ...25 more annotations...
  • “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
  • Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.
  • the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.
  • Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.
  • ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.
  • The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.
  • This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.
  • These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own
  • But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.
  • Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”
  • maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.
  • All this invites the most important question there is: What is learning for?
  • Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.
  • If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.
  • more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and that it is the end rather than merely a means to one.
  • Every step along the way that is skipped, the shorter the journey becomes, the less we will take in as we travel.
  • This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects.
  • While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.
  • One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.
  • Hopefully, these things are also the most valuable because they are what make us human.
  • Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”
  • Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”
  • Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.
  • Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”
  • The central theme of the course was empathy.
lilyrashkind

Gibraltar mansion could be saved by Wilmington Delaware ownership plan - 0 views

  • Gibraltar Preservation Group – a limited liability company of which Drake Cattermole and David Carpenter are principals – has owned the 6-acre property since 2010. The two spent the following years amassing adjacent parcels to propose a financially viable rehabilitation project for the historic property.
  • During that time, the mansion sat vacant and deteriorated, an aspect that opponents of redeveloping the historic property have pushed to the forefront in their arguments. Local developer Robert Snowberger, of 9SDC – a Wilmington-based historic preservation contractor – introduced plans in February to turn the Gibraltar mansion into a boutique hotel, renovate the greenhouse and garage into restaurant and retail space, and build townhomes on vacant land surrounding the property.
  • “All would be subject to appropriate restrictive covenants to ensure against unwanted commercial uses,” he wrote. “The city will contract with 9SDC to act as developer of the site, most especially because they have been integral to negotiations with the owner entity, because they have been deeply involved with securing the vitally needed historic tax credits and because they have experience with restoration of historic properties.”
  • ...6 more annotations...
  • The mayor’s proposal, sent to residents in the Highlands community May 25 under Purzycki’s personal letterhead and address, attempts to strike a compromise to ensure the mansion is rehabilitated while reducing the project’s impact on the community.
  • “When the mayor says this should be taken off the owner’s hands, I applaud him for that,” said Michael Melloy, a Forty Acres resident who grew up in the Highlands. “But the city of Wilmington is in no position to manage a high-end historic mansion and garden.”
  • Melloy and others continue to press for the state to enforce the conservation easement – which requires the current owners to stabilize and secure the mansion at their expense – and consider taking ownership of the property as it has done with other historical sites in Delaware.
  • Purzycki told Delaware Online/The News Journal during a recent phone interview that the bond bill funding request would go toward rehabilitating the mansion, not the owners, per his latest proposal. Melloy argues in a draft letter that while the state funds wouldn't go directly to the owners, "the effect is the same: the owners’ financial responsibility is absolved and transferred to all Delaware taxpayers." The developer did not return calls requesting comment.
  • That pursuit would mean lawsuits and potentially years of red tape that would stall any progress in rehabbing Gibraltar, the mayor said. Melloy contends city departments have neither the historic preservation nor the horticultural experience to own and maintain Gibraltar and the accompanying gardens. The Wilmington resident points to a lack of code enforcement at the property over the years, and the recent condemnation of several buildings on North Adams Street as examples of the city’s failure to maintain properties in general.
  • As for the state or a nonprofit taking ownership of the historic estate, Purzycki said the state isn’t interested and noted that Preservation Delaware – a nonprofit – previously owned Gibraltar “and that didn’t work out, did it?”
Javier E

Did politics cut 'systemic' from AP African American studies plan? - Washington Post - 0 views

  • A politically charged adjective popped up repeatedly in the evolving plans for a new Advanced Placement course on African American studies. It was “systemic.”
  • The February 2022 version declared that students should learn how African American communities combat effects of “systemic marginalization.” An April update paired “systemic” with discrimination, oppression, inequality, disempowerment and racism. A December version said it was essential to know links between Black Panther activism and “systemic inequality that disproportionately affected African Americans.”
  • Then the word vanished. “Systemic,” a crucial term for many scholars and civil rights advocates, appears nowhere in the official version released Feb. 1. This late deletion and others reflect the extraordinary political friction that often shadows efforts in the nation’s schools to teach about history, culture and race.
  • ...12 more annotations...
  • a senior College Board official now acknowledges the organization was mindful of how “systemic” and certain other words in the modern lexicon of race in America would receive intense scrutiny in some places.
  • Jason Manoharan, vice president for AP program development. He said the College Board worried some phrases and concepts had been “co-opted for a variety of purposes” and were being used as “political instruments.” So the organization took a cautious approach to the final edits even as it sought to preserve robust content on historical and cultural impacts of slavery and racial discrimination.
  • “We wanted this course to be adopted by 50 states, and we wanted as many students and teachers as possible to be able to experience it,” Manoharan said. His acknowledgment underscored the inherent politics behind promoting a course that deals so squarely with race in America.
  • John K. Thornton, a professor of African American studies and history at Boston University, who contributed to the planning, said he was pleased the course opens with five weeks on early Africa. But he lamented that reparations and Black Lives Matter ended up only as optional research topics. “It did upset me a little bit,” he said. “Those things obviously feel very much a part of what a college course is about.”
  • DeSantis, a potential presidential candidate, has accused the course architects of promoting “a political agenda.” He also criticized an early course plan’s references to Black queer studies and “intersectionality,” a concept that helps explain overlapping forms of discrimination that affect Black women and others.
  • Teresa Reed, dean of music at the University of Louisville, said her work as one of 13 members of the AP African American studies committee resembled similar assignments she has undertaken for other AP courses. Reed supports the African American studies course plan and said it will continue to be revised as pilot teachers give feedback. She said she saw no evidence of political meddling in the course design. “That was absolutely not my experience,”
  • Two luminaries in the field, Henry Louis Gates Jr. and Evelyn Brooks Higginbotham, both of Harvard University and both of whom advised the College Board, also issued statements vouching for the course.
  • The first 81-page draft of the course plan, in February 2022, drew topics and sources from the syllabi of introductory classes at historically Black universities, Ivy League schools and other prominent institutions. The College Board said it was produced as a preview for 200 college professors at a March 2022 symposium. Faculty recommended cutting 20 percent to 25 percent of the proposed topics, the College Board said, and as much as half of suggested readings.
  • The April version, 299 pages, was the pilot course guide, a road map for teachers before classes began in the fall. It included much more detail on goals, essential knowledge and potential source material. It also made an important switch on contemporary issues: Certain lessons on reparations, incarceration and movements for Black lives became optional and would not be covered on the AP exam. At this stage, the guide included a week of instruction on Black feminism, womanism and intersectionality, and it used the word “systemic” nine times.
  • One of the most consequential decisions made last year was to set aside significant time — ultimately, three weeks — near the end of the course for a research paper of up to 1,500 words on a topic students would choose. The project will count for 20 percent of the AP score for those who seek college credit.
  • Among 40 sample topics in the official plan are Black Lives Matter; intersectionality; reparations debates; gay life and expression in Black communities; and Black conservatism.
  • College Board officials point to the development of an extensive digital library for the course — including a 1991 text on intersectionality from Crenshaw — as evidence that they are not censoring writers or voices. Teachers, they say, use the course framework as a starting point to design their own syllabi of readings and assignments.
Javier E

Who Watches the Watchdog? The CJR's Russia Problem - Byline Times - 0 views

  • In December 2018, Pope commissioned me to report for the CJR on the troubled history of The Nation magazine and its apparent support for the policies of Vladimir Putin. 
  • My $6,000 commission to write for the prestigious “watchdog” was flattering and exciting – but would also be a hard call. Watchdogs, appointed or self-proclaimed, can only claim entitlement when they hold themselves to the highest possible standards of reporting and conduct. It was not to be.
  • For me, the project was vital but also a cause for personal sadness.  During the 1980s, I had been an editor of The Nation’s British sister magazine New Statesman and had served as chair of its publishing company. I knew, worked with and wrote for The Nation’s then-editor, the late Victor Navasky. He subsequently chaired the CJR. 
  • ...28 more annotations...
  • Investigating and calling out a magazine and editor for which I felt empathy, and had historic connections to, hearing from its critics and dissidents, and finding whistleblowers and confidential inside sources was a challenge. But hearing responses from all sides was a duty.
  • I worked on it for six months, submitting a first draft of my story to the CJR‘s line editor in the summer of 2019. From then on my experience of the CJR was devastating and damaging.
  • After delivering the story and working through a year-long series of edits and re-edits required by Pope, the story was slow-walked to dismissal. In 2022, after Russian tanks had rolled towards Kyiv, I urged Pope to restore and publish the report, given the new and compelling public interest. He refused.
  • The trigger for my CJR investigation was a hoax concerning Democratic Party emails hacked and dumped in 2016 by teams from Russia’s GRU intelligence agency. The GRU officers responsible were identified and their methods described in detail in the 2019 Mueller Report.
  • The Russians used the dumped emails decisively – first to leverage an attack on that year’s Democratic National Convention; and then to divert attention from Donald Trump’s gross indiscretions at critical times before his election
  • In 2017, with Trump in the White House, Russian and Republican denial operations began, challenging the Russian role and further widening divisions in America. A pinnacle of these operations was the publication in The Nation on 9 August 2017 of an article – still online under a new editor – claiming that the stolen emails were leaked from inside the DNC.  
  • Immediately after the article appeared, Trump-supporting media and his MAGA base were enthralled. They celebrated that a left-liberal magazine had refuted the alleged Russian operations in supporting Trump, and challenged the accuracy of mainstream press reporting on ‘Russiagate’
  • Nation staff and advisors were aghast to find their magazine praised lavishly by normally rabid outlets – Fox News, Breitbart, the Washington Times. Even the President’s son.
  • When I was shown the Nation article later that year by one of the experts it cited, I concluded that it was technical nonsense, based on nothing.  The White House felt differently and directed the CIA to follow up with the expert, former senior National Security Agency official and whistleblower, William Binney (although nothing happened)
  • Running the ‘leak’ article positioned the left-wing magazine strongly into serving streams of manufactured distractions pointing away from Russian support for Trump.
  • I traced the source of the leak claim to a group of mainly American young right-wing activists delivering heavy pro-Russian and pro-Syrian messaging, working with a British collaborator. Their leader, William Craddick, had boasted of creating the ‘Pizzagate’ conspiracy story – a fantasy that Hillary Clinton and her election staff ran a child sex and torture ring in the non-existent basement of a pleasant Washington neighbourhood pizzeria. Their enterprise had clear information channels from Moscow. 
  • We spoke for 31 minutes at 1:29 ET on 12 April 2019. During the conversation, concerning conflicts of interest, Pope asked only about my own issues – such as that former editor Victor Navasky, who would figure in the piece, had moved from running and owning The Nation to being Chair of the CJR board; and that the independent wealth foundation of The Nation editor Katrina vanden Heuvel – the Kat Foundation – periodically donated to Columbia University.
  • She and her late husband, Professor Stephen Cohen, were at the heart of my reporting on the support The Nation gave to Putin’s Russia. Sixteen months later, as Pope killed my report, he revealed that he had throughout been involved in an ambitious and lucratively funded partnership between the CJR and The Nation, and between himself and vanden Heuvel. 
  • On the day we spoke, I now know, Pope was working with vanden Heuvel and The Nation to launch – 18 days later – a major new international joint journalism project, ‘Covering Climate Now!’
  • Soon after we spoke, the CJR tweeted that “CJR and @thenation are gathering some of the world’s top journalists, scientists, and climate experts” for the event. I did not see the tweet. Pope and the CJR staff said nothing of this to me. 
  • Any editor must know without doubt in such a situation, that every journalist has a duty of candour and a clear duty to recuse themselves from editorial authority if any hint of conflict of interest arises. Pope did not take these steps. From then until August 2020, through his deputy, he sent me a stream of directions that had the effect of removing adverse material about vanden Heuvel and its replacement with lists of her ‘achievements’. Then he killed the story
  • Working on my own story for the CJR, I did not look behind or around – or think I needed to. I was working for the self-proclaimed ‘watchdog of journalism’. I forgot the ancient saw: who watches the watchdog?
  • This week, Kyle Pope failed to reply to questions from Byline Times about conflicts of interest in linking up with the subjects of the report he had commissioned.
  • During the period I was preparing the report about The Nation and its editor, he wrote for The Nation on nine occasions. He has admitted being remunerated by the publication. While I was working for the CJR, he said nothing. He did not recuse himself, and actively intervened to change content for a further 18 months.
  • On April 16 2019, I was informed that Katrina vanden Heuvel had written to Pope to ask about my report. “We’re going to say thanks for her thoughts and that we’ll make sure the piece is properly vetted and fact-checked,” I was told
  • A month later, I interviewed her for the CJR. Over the course of our 100-minute discussion, it must have slipped her mind to mention that she and Kyle Pope had just jointly celebrated being given more than $1 million from the Rockefeller Family and other foundations to support their climate project.
  • Pope then asked me to identify my confidential sources from inside The Nation, describing this as a matter of “policy”
  • Pope asked several times that the article be amended to state that there were general tie-ups between the US left and Putin. I responded that I could find no evidence to suggest that was true, save that the Daily Beast had uncovered RT attempting cultivation of the US left. 
  • Pope then wanted the 6,000-word and fully edited report cut by 1,000 words, mainly to remove material about the errors in The Nation article. Among sections cut down were passages showing how, from 2014 onwards, vanden Heuvel had hired a series of pro-Russian correspondents after they had praised her husband. Among the new intake was a broadcaster supportive of the Russian and Syrian governments, Aaron Maté, taken on in 2017 after he had platformed Cohen on his show The Real News. 
  • On 30 January 2023, the CJR published an immense four-part 23,000-word series on Trump, Russia and the US media. The CJR‘s writers found their magazine praised lavishly by normally rabid outlets. Fox News rejoiced that The New York Times had been “skewered by the liberal media watchdog the Columbia Journalism Review” over “Russiagate”. WorldNetDaily called it a “win for Trump”.
  • Pope agreed. Trump had “hailed our report as proof of the media assault on Trump that they’ve been hyping all along,” he wrote. “Trump cheered that view on Truth Social, his own, struggling social-media platform.”
  • In the series, writer Jeff Gerth condemns multiple Pulitzer Prize-winning reports on Russian interference operations by US mainstream newspapers. Echoing words used in 2020 by vanden Heuvel, he cited as more important “RealClearInvestigations, a non-profit online news site that has featured articles critical of the Russia coverage by writers of varying political orientation, including Aaron Maté”.
  • As with The Nation in 2017, the CJR is seeing a storm of derisive and critical evaluations of the series by senior American journalists. More assessments are said to be in the pipeline. “We’re taking the critiques seriously,” Pope said this week. The Columbia Journalism Review may now have a Russia Problem.  
Javier E

Ukraine War and U.S. Politics Complicate Climate Change Fight - The New York Times - 0 views

  • Energy experts said that Mr. Biden missed an opportunity to connect the war in Ukraine to the need to more swiftly sever an economic reliance on fossil fuels. “The president did not articulate the long-term opportunity for the U.S. to lead the world in breaking free of the geopolitical nightmare that is oil dependency,” said Paul Bledsoe, a strategic adviser to the Progressive Policy Institute, a Washington-based think tank.
  • In exposing the enormous leverage that Russia has enjoyed with its energy exports, the Ukraine conflict is forcing European leaders to make some urgent choices: Should Europe build new fossil fuel infrastructure so that it can replace Russian fuel with liquefied natural gas from elsewhere, chiefly the United States? Or should it shift away from fossil fuels faster?
  • A draft of the report, reviewed by The New York Times, suggests that the new strategy will propose speeding up energy efficiency measures and renewable energy installations. It views imports of liquefied natural gas, or L.N.G., from the United States and elsewhere as a short-term measure to offset Russian piped gas.
  • ...8 more annotations...
  • Analysts have said European countries can quickly reduce gas dependence through energy efficiency measures and by ramping up renewable energy investments, which are already in line with Europe’s ambition to stop pumping additional greenhouse gases into the atmosphere by midcentury
  • The conflict in Ukraine could fast-track some of that. It could also lead to what Lisa Fischer, who follows energy policy at E3G, a research group, called “a tectonic shift” — using renewables, rather than ample gas storage, to achieve energy security.
  • The President’s centerpiece legislative agenda, which he had called the Build Back Better act, is dead. Democrats still hope to pass approximately $500 billion of clean energy tax incentives that had been part of the package, but opportunities to do so are waning
  • The United States, for its part, has ramped up exports of L.N.G. to Europe to counter the decline in Russian piped gas. By the end of this year, the United States is poised to have the world’s largest L.N.G. export capacity.
  • White House officials said Mr. Biden wove climate change and clean energy throughout his speech. He noted that Ford and GM are investing billions of dollars to build electric vehicles, creating millions of manufacturing jobs in the United States. He also noted that funding from the infrastructure package will build a national network of 500,000 electric vehicle charging stations.
  • “Energy is a key weapon within this fight, and if there were far less dependency on gas there would be a different set of plays.”
  • If that investment does not come through and the Supreme Court also restricts the administration’s ability to regulate emissions, Mr. Biden’s goal of cutting United States emissions roughly in half compared with 2005 levels could be essentially unattainable.
  • Even if climate wasn’t the stated focus of Mr. Biden’s Tuesday address, administration officials said that Russia’s war against Ukraine has not pushed climate change off the agenda. They noted that Mr. Biden has made climate change an emphasis in virtually every federal agency, and has moved ahead with major clean energy deployments including a record-breaking offshore wind auction last week that brought in more than $4 billion.
Javier E

AI Is Running Circles Around Robotics - The Atlantic - 0 views

  • Large language models are drafting screenplays and writing code and cracking jokes. Image generators, such as Midjourney and DALL-E 2, are winning art prizes and democratizing interior design and producing dangerously convincing fabrications. They feel like magic. Meanwhile, the world’s most advanced robots are still struggling to open different kinds of doors
  • the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”
  • The paradox has grown only more apparent in the past few years: AI research races forward; robotics research stumbles. In part that’s because the two disciplines are not equally resourced. Fewer people work on robotics than on AI.
  • ...7 more annotations...
  • In theory, a robot could be trained on data drawn from computer-simulated movements, but there, too, you must make trade-offs
  • Jang compared computation to a tidal wave lifting technologies up with it: AI is surfing atop the crest; robotics is still standing at the water’s edge.
  • Whatever its causes, the lag in robotics could become a problem for AI. The two are deeply intertwined
  • But the biggest obstacle for roboticists—the factor at the core of Moravec’s paradox—is that the physical world is extremely complicated, far more so than language
  • Some researchers are skeptical that a model trained on language alone, or even language and images, could ever achieve humanlike intelligence. “There’s too much that’s left implicit in language,” Ernest Davis, a computer scientist at NYU, told me. “There’s too much basic understanding of the world that is not specified.” The solution, he thinks, is having AI interact directly with the world via robotic bodies. But unless robotics makes some serious progress, that is unlikely to be possible anytime soon.
  • For years already, engineers have used AI to help build robots. In a more extreme, far-off vision, super-intelligent AIs could simply design their own robotic body. But for now, Finn told me, embodied AI is still a ways off. No android assassins. No humanoid helpers.
  • Set in the context of our current technological abilities, HAL’s murderous exchange with Dave from 2001: A Space Odyssey would read very differently. The machine does not refuse to help its human master. It simply isn’t capable of doing so. “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.”
Javier E

Opinion | It's 2086. This Is What American History Could Look Like. - The New York Times - 0 views

  • If it seems far-fetched that a notorious insurgent could be given such a place of honor, the past begs to differ. When the Confederate president, Jefferson Davis, was imprisoned after the Civil War (rumored to be dressed at the time of his arrest in his own outlandish costume), he was more reviled and mocked than any Capitol rioter, and his crimes far more serious. His statue joined George Washington’s in the Capitol 65 years later.
  • As curators at the Smithsonian’s National Museum of American History, we are regularly confronted by hard physical evidence of just how slippery the past can be.
  • It is chilling, but not impossible, to envision the signs screaming “Stop the steal!” picked up on the garbage-strewn National Mall on Jan. 7, 2021, treated one day as patriotic treasures, displayed alongside the writing desk Thomas Jefferson used to draft the Declaration of Independence or the inkwell Abraham Lincoln dipped into to compose the Emancipation Proclamation.
  • ...9 more annotations...
  • History, however, may have other plans. Contrary to the mantra, it has no right or wrong side.
  • Judging, it turns out, isn’t history’s strong suit. Notions of justice change radically over time, and they are not the reason we collect, preserve or display objects from the past
  • To curators and historians, the evolving meaning of our objects is far more fascinating than whom they label as unrighteous
  • The collections of the Smithsonian contain, for instance, pikes from John Brown’s failed slave rebellion in the South in 1859. At different moments since then, his pikes have symbolized a demented terrorist’s scheme for mass murder, a religious fanatic’s fiery crusade and a hero’s lonely struggle for justice.
  • Nothing in our past, no matter how blatant it may seem to us today, is guaranteed eternal condemnation
  • Our recent reckoning with American history has shown the indelible impact of staid forms of institutional power, like dedicating monuments, inscribing plaques and holding hearings. Enshrining rioters as heroes could be done fairly quietly.
  • There’s no controlling what the future will say about us. Generations just keep coming, re-evaluating old heroes and asking new questions.
  • We cannot know; we have no ownership over what is to come. The best we can do is map our moment scrupulously, to preserve the signposts that will lead to a place we’ll never see.
  • As curators, as historians, as citizens, we are frequently reminded that the past is a foreign country. But so is the future.
Javier E

Ozempic or Bust - The Atlantic - 0 views

  • June 2024 Issue
  • it is impossible to know, in the first few years of any novel intervention, whether its success will last.
  • ...77 more annotations...
  • The ordinary fixes—the kind that draw on people’s will, and require eating less and moving more—rarely have a large or lasting effect. Indeed, America itself has suffered through a long, maddening history of failed attempts to change its habits on a national scale: a yo-yo diet of well-intentioned treatments, policies, and other social interventions that only ever lead us back to where we started
  • Through it all, obesity rates keep going up; the diabetes epidemic keeps worsening.
  • The most recent miracle, for Barb as well as for the nation, has come in the form of injectable drugs. In early 2021, the Danish pharmaceutical company Novo Nordisk published a clinical trial showing remarkable results for semaglutide, now sold under the trade names Wegovy and Ozempic.
  • Patients in the study who’d had injections of the drug lost, on average, close to 15 percent of their body weight—more than had ever been achieved with any other drug in a study of that size. Wadden knew immediately that this would be “an incredible revolution in the treatment of obesity.”
  • Many more drugs are now racing through development: survodutide, pemvidutide, retatrutide. (Among specialists, that last one has produced the most excitement: An early trial found an average weight loss of 24 percent in one group of participants.)
  • In the United States, an estimated 189 million adults are classified as having obesity or being overweight
  • The drugs don’t work for everyone. Their major side effects—nausea, vomiting, and diarrhea—can be too intense for many patients. Others don’t end up losing any weight
  • For the time being, just 25 percent of private insurers offer the relevant coverage, and the cost of treatment—about $1,000 a month—has been prohibitive for many Americans.
  • The drugs have already been approved not just for people with diabetes or obesity, but for anyone who has a BMI of more than 27 and an associated health condition, such as high blood pressure or cholesterol. By those criteria, more than 140 million American adults already qualify
  • if this story goes the way it’s gone for other “risk factor” drugs such as statins and antihypertensives, then the threshold for prescriptions will be lowered over time, inching further toward the weight range we now describe as “normal.”
  • How you view that prospect will depend on your attitudes about obesity, and your tolerance for risk
  • The first GLP-1 drug to receive FDA approval, exenatide, has been used as a diabetes treatment for more than 20 years. No long-term harms have been identified—but then again, that drug’s long-term effects have been studied carefully only across a span of seven years
  • the data so far look very good. “These are now being used, literally, in hundreds of thousands of people across the world,” she told me, and although some studies have suggested that GLP-1 drugs may cause inflammation of the pancreas, or even tumor growth, these concerns have not borne out.
  • adolescents are injecting newer versions of these drugs, and may continue to do so every week for 50 years or more. What might happen over all that time?
  • “All of us, in the back of our minds, always wonder, Will something show up?  ” Although no serious problems have yet emerged, she said, “you wonder, and you worry.”
  • in light of what we’ve been through, it’s hard to see what other choices still remain. For 40 years, we’ve tried to curb the spread of obesity and its related ailments, and for 40 years, we’ve failed. We don’t know how to fix the problem. We don’t even understand what’s really causing it. Now, again, we have a new approach. This time around, the fix had better work.
  • The fen-phen revolution arrived at a crucial turning point for Wadden’s field, and indeed for his career. By then he’d spent almost 15 years at the leading edge of research into dietary interventions, seeing how much weight a person might lose through careful cutting of their calories.
  • But that sort of diet science—and the diet culture that it helped support—had lately come into a state of ruin. Americans were fatter than they’d ever been, and they were giving up on losing weight. According to one industry group, the total number of dieters in the country declined by more than 25 percent from 1986 to 1991.
  • Rejecting diet culture became something of a feminist cause. “A growing number of women are joining in an anti-diet movement,” The New York Times reported in 1992. “They are forming support groups and ceasing to diet with a resolve similar to that of secretaries who 20 years ago stopped getting coffee for their bosses.
  • Now Wadden and other obesity researchers were reaching a consensus that behavioral interventions might produce in the very best scenario an average lasting weight loss of just 5 to 10 percent
  • National surveys completed in 1994 showed that the adult obesity rate had surged by more than half since 1980, while the proportion of children classified as overweight had doubled. The need for weight control in America had never seemed so great, even as the chances of achieving it were never perceived to be so small.
  • Wadden wasn’t terribly concerned, because no one in his study had reported any heart symptoms. But ultrasounds revealed that nearly one-third of them had some degree of leakage in their heart valves. His “cure for obesity” was in fact a source of harm.
  • In December 1994, the Times ran an editorial on what was understood to be a pivotal discovery: A genetic basis for obesity had finally been found. Researchers at Rockefeller University were investigating a molecule, later named leptin, that gets secreted from fat cells and travels to the brain, and that causes feelings of satiety. Lab mice with mutations in the leptin gene—importantly, a gene also found in humans—overeat until they’re three times the size of other mice. “The finding holds out the dazzling hope,”
  • In April 1996, the doctors recommended yes: Dexfenfluramine was approved—and became an instant blockbuster. Patients received prescriptions by the hundreds of thousands every month. Sketchy wellness clinics—call toll-free, 1-888-4FEN-FEN—helped meet demand. Then, as now, experts voiced concerns about access. Then, as now, they worried that people who didn’t really need the drugs were lining up to take them. By the end of the year, sales of “fen” alone had surpassed $300 million.
  • It was nothing less than an awakening, for doctors and their patients alike. Now a patient could be treated for excess weight in the same way they might be treated for diabetes or hypertension—with a drug they’d have to take for the rest of their life.
  • the article heralded a “new understanding of obesity as a chronic disease rather than a failure of willpower.”
  • News had just come out that, at the Mayo Clinic in Minnesota, two dozen women taking fen-phen—including six who were, like Barb, in their 30s—had developed cardiac conditions. A few had needed surgery, and on the operating table, doctors discovered that their heart valves were covered with a waxy plaque.
  • Americans had been prescribed regular fenfluramine since 1973, and the newer drug, dexfenfluramine, had been available in France since 1985. Experts took comfort in this history. Using language that is familiar from today’s assurances regarding semaglutide and other GLP-1 drugs, they pointed out that millions were already on the medication. “It is highly unlikely that there is anything significant in toxicity to the drug that hasn’t been picked up with this kind of experience,” an FDA official named James Bilstad would later say in a Time cover story headlined “The Hot New Diet Pill.”
  • “I know I can’t get any more,” she told Williams. “I have to use up what I have. And then I don’t know what I’m going to do after that. That’s the problem—and that is what scares me to death.” Telling people to lose weight the “natural way,” she told another guest, who was suggesting that people with obesity need only go on low-carb diets, is like “asking a person with a thyroid condition to just stop their medication.”
  • She’d gone off the fen-phen and had rapidly regained weight. “The voices returned and came back in a furor I’d never heard before,” Barb later wrote on her blog. “It was as if they were so angry at being silenced for so long, they were going to tell me 19 months’ worth of what they wanted me to hear. I was forced to listen. And I ate. And I ate. And ate.”
  • For Barb, rapid weight loss has brought on a different metaphysical confusion. When she looks in the mirror, she sometimes sees her shape as it was two years ago. In certain corners of the internet, this is known as “phantom fat syndrome,” but Barb dislikes that term. She thinks it should be called “body integration syndrome,” stemming from a disconnect between your “larger-body memory” and “smaller-body reality.”
  • In 2003, the U.S. surgeon general declared obesity “the terror within, a threat that is every bit as real to America as the weapons of mass destruction”; a few months later, Eric Finkelstein, an economist who studies the social costs of obesity, put out an influential paper finding that excess weight was associated with up to $79 billion in health-care spending in 1998, of which roughly half was paid by Medicare and Medicaid. (Later he’d conclude that the number had nearly doubled in a decade.
  • In 2004, Finkelstein attended an Action on Obesity summit hosted by the Mayo Clinic, at which numerous social interventions were proposed, including calorie labeling in workplace cafeterias and mandatory gym class for children of all grades.
  • The message at their core, that soda was a form of poison like tobacco, spread. In San Francisco and New York, public-service campaigns showed images of soda bottles pouring out a stream of glistening, blood-streaked fat. Michelle Obama led an effort to depict water—plain old water—as something “cool” to drink.
  • Soon, the federal government took up many of the ideas that Brownell had helped popularize. Barack Obama had promised while campaigning for president that if America’s obesity trends could be reversed, the Medicare system alone would save “a trillion dollars.” By fighting fat, he implied, his ambitious plan for health-care reform would pay for itself. Once he was in office, his administration pulled every policy lever it could.
  • Michelle Obama helped guide these efforts, working with marketing experts to develop ways of nudging kids toward better diets and pledging to eliminate “food deserts,” or neighborhoods that lacked convenient access to healthy, affordable food. She was relentless in her public messaging; she planted an organic garden at the White House and promoted her signature “Let’s Move!” campaign around the country.
  • An all-out war on soda would come to stand in for these broad efforts. Nutrition studies found that half of all Americans were drinking sugar-sweetened beverages every day, and that consumption of these accounted for one-third of the added sugar in adults’ diets. Studies turned up links between people’s soft-drink consumption and their risks for type 2 diabetes and obesity. A new strand of research hinted that “liquid calories” in particular were dangerous to health.
  • when their field lost faith in low-calorie diets as a source of lasting weight loss, the two friends went in opposite directions. Wadden looked for ways to fix a person’s chemistry, so he turned to pharmaceuticals. Brownell had come to see obesity as a product of our toxic food environment: He meant to fix the world to which a person’s chemistry responded, so he started getting into policy.
  • The social engineering worked. Slowly but surely, Americans’ lamented lifestyle began to shift. From 2001 to 2018, added-sugar intake dropped by about one-fifth among children, teens, and young adults. From the late 1970s through the early 2000s, the obesity rate among American children had roughly tripled; then, suddenly, it flattened out.
  • although the obesity rate among adults was still increasing, its climb seemed slower than before. Americans’ long-standing tendency to eat ever-bigger portions also seemed to be abating.
  • sugary drinks—liquid candy, pretty much—were always going to be a soft target for the nanny state. Fixing the food environment in deeper ways proved much harder. “The tobacco playbook pretty much only works for soda, because that’s the closest analogy we have as a food item,
  • that tobacco playbook doesn’t work to increase consumption of fruits and vegetables, he said. It doesn’t work to increase consumption of beans. It doesn’t work to make people eat more nuts or seeds or extra-virgin olive oil.
  • Careful research in the past decade has shown that many of the Obama-era social fixes did little to alter behavior or improve our health. Putting calorie labels on menus seemed to prompt at most a small decline in the amount of food people ate. Employer-based wellness programs (which are still offered by 80 percent of large companies) were shown to have zero tangible effects. Health-care spending, in general, kept going up.
  • From the mid-1990s to the mid-2000s, the proportion of adults who said they’d experienced discrimination on account of their height or weight increased by two-thirds, going up to 12 percent. Puhl and others started citing evidence that this form of discrimination wasn’t merely a source of psychic harm, but also of obesity itself. Studies found that the experience of weight discrimination is associated with overeating, and with the risk of weight gain over time.
  • obesity rates resumed their ascent. Today, 20 percent of American children have obesity. For all the policy nudges and the sensible revisions to nutrition standards, food companies remain as unfettered as they were in the 1990s, Kelly Brownell told me. “Is there anything the industry can’t do now that it was doing then?” he asked. “The answer really is no. And so we have a very predictable set of outcomes.”
  • she started to rebound. The openings into her gastric pouch—the section of her stomach that wasn’t bypassed—stretched back to something like their former size. And Barb found ways to “eat around” the surgery, as doctors say, by taking food throughout the day in smaller portions
  • Bariatric surgeries can be highly effective for some people and nearly useless for others. Long-term studies have found that 30 percent of those who receive the same procedure Barb did regain at least one-quarter of what they lost within two years of reaching their weight nadir; more than half regain that much within five years.
  • if the effects of Barb’s surgery were quickly wearing off, its side effects were not: She now had iron, calcium, and B12 deficiencies resulting from the changes to her gut. She looked into getting a revision of the surgery—a redo, more or less—but insurance wouldn’t cover it
  • She found that every health concern she brought to doctors might be taken as a referendum, in some way, on her body size. “If I stubbed my toe or whatever, they’d just say ‘Lose weight.’ ” She began to notice all the times she’d be in a waiting room and find that every chair had arms. She realized that if she was having a surgical procedure, she’d need to buy herself a plus-size gown—or else submit to being covered with a bedsheet when the nurses realized that nothing else would fit.
  • Barb grew angrier and more direct about her needs—You’ll have to find me a different chair, she started saying to receptionists. Many others shared her rage. Activists had long decried the cruel treatment of people with obesity: The National Association to Advance Fat Acceptance had existed, for example, in one form or another, since 1969; the Council on Size & Weight Discrimination had been incorporated in 1991. But in the early 2000s, the ideas behind this movement began to wend their way deeper into academia, and they soon gained some purchase with the public.
  • “Our public-health efforts to address obesity have failed,” Eric Finkelstein, the economist, told me.
  • Others attacked the very premise of a “healthy weight”: People do not have any fundamental need, they argued, morally or medically, to strive for smaller bodies as an end in itself. They called for resistance to the ideology of anti-fatness, with its profit-making arms in health care and consumer goods. The Association for Size Diversity and Health formed in 2003; a year later, dozens of scholars working on weight-related topics joined together to create the academic field of fat studies.
  • As the size-diversity movement grew, its values were taken up—or co-opted—by Big Business. Dove had recently launched its “Campaign for Real Beauty,” which included plus-size women. (Ad Age later named it the best ad campaign of the 21st century.) People started talking about “fat shaming” as something to avoid
  • By 2001, Bacon, who uses they/them pronouns, had received their Ph.D. and finished a rough draft of a book, Health at Every Size, which drew inspiration from a broader movement by that name among health-care practitioners
  • But something shifted in the ensuing years. In 2007, Bacon got a different response, and the book was published. Health at Every Size became a point of entry for a generation of young activists and, for a time, helped shape Americans’ understanding of obesity.
  • Some experts were rethinking their advice on food and diet. At UC Davis, a physiologist named Lindo Bacon who had struggled to overcome an eating disorder had been studying the effects of “intuitive eating,” which aims to promote healthy, sustainable behavior without fixating on what you weigh or how you look
  • The heightened sensitivity started showing up in survey data, too. In 2010, fewer than half of U.S. adults expressed support for giving people with obesity the same legal protections from discrimination offered to people with disabilities. In 2015, that rate had risen to three-quarters.
  • In Bacon’s view, the 2000s and 2010s were glory years. “People came together and they realized that they’re not alone, and they can start to be critical of the ideas that they’ve been taught,” Bacon told me. “We were on this marvelous path of gaining more credibility for the whole Health at Every Size movement, and more awareness.”
  • that sense of unity proved short-lived; the movement soon began to splinter. Black women have the highest rates of obesity, and disproportionately high rates of associated health conditions. Yet according to Fatima Cody Stanford, an obesity-medicine physician at Harvard Medical School, Black patients with obesity get lower-quality care than white patients with obesity.
  • That system was exactly what Bacon and the Health at Every Size movement had set out to reform. The problem, as they saw it, was not so much that Black people lacked access to obesity medicine, but that, as Bacon and the Black sociologist Sabrina Strings argued in a 2020 article, Black women have been “specifically targeted” for weight loss, which Bacon and Strings saw as a form of racism
  • But members of the fat-acceptance movement pointed out that their own most visible leaders, including Bacon, were overwhelmingly white. “White female dietitians have helped steal and monetize the body positive movement,” Marquisele Mercedes, a Black activist and public-health Ph.D. student, wrote in September 2020. “And I’m sick of it.”
  • Tensions over who had the standing to speak, and on which topics, boiled over. In 2022, following allegations that Bacon had been exploitative and condescending toward Black colleagues, the Association for Size Diversity and Health expelled them from its ranks and barred them from attending its events.
  • As the movement succumbed to in-fighting, its momentum with the public stalled. If attitudes about fatness among the general public had changed during the 2000s and 2010s, it was only to a point. The idea that some people can indeed be “fit but fat,” though backed up by research, has always been a tough sell.
  • Although Americans had become less inclined to say they valued thinness, measures of their implicit attitudes seemed fairly stable. Outside of a few cities such as San Francisco and Madison, Wisconsin, new body-size-discrimination laws were never passed.
  • In the meantime, thinness was coming back into fashion
  • In the spring of 2022, Kim Kardashian—whose “curvy” physique has been a media and popular obsession—boasted about crash-dieting in advance of the Met Gala. A year later, the model and influencer Felicity Hayward warned Vogue Business that “plus-size representation has gone backwards.” In March of this year, the singer Lizzo, whose body pride has long been central to her public persona, told The New York Times that she’s been trying to lose weight. “I’m not going to lie and say I love my body every day,” she said.
  • Among the many other dramatic effects of the GLP-1 drugs, they may well have released a store of pent-up social pressure to lose weight.
  • If ever there was a time to debate that impulse, and to question its origins and effects, it would be now. But Puhl told me that no one can even agree on which words are inoffensive. The medical field still uses obesity, as a description of a diagnosable disease. But many activists despise that phrase—some spell it with an asterisk in place of the e—and propose instead to reclaim fat.
  • Everyone seems to agree on the most important, central fact: that we should be doing everything we can to limit weight stigma. But that hasn’t been enough to stop the arguing.
  • Things feel surreal these days to just about anyone who has spent years thinking about obesity. At 71, after more than four decades in the field, Thomas Wadden now works part-time, seeing patients just a few days a week. But the arrival of the GLP-1 drugs has kept him hanging on for a few more years, he said. “It’s too much of an exciting period to leave obesity research right now.”
  • When everyone is on semaglutide or tirzepatide, will the soft-drink companies—Brownell’s nemeses for so many years—feel as if a burden has been lifted? “My guess is the food industry is probably really happy to see these drugs come along,” he said. They’ll find a way to reach the people who are taking GLP‑1s, with foods and beverages in smaller portions, maybe. At the same time, the pressures to cut back on where and how they sell their products will abate.
  • the triumph in obesity treatment only highlights the abiding mystery of why Americans are still getting fatter, even now
  • Perhaps one can lay the blame on “ultraprocessed” foods, he said. Maybe it’s a related problem with our microbiomes. Or it could be that obesity, once it takes hold within a population, tends to reproduce itself through interactions between a mother and a fetus. Others have pointed to increasing screen time, how much sleep we get, which chemicals are in the products that we use, and which pills we happen to take for our many other maladies.
  • “The GLP-1s are just a perfect example of how poorly we understand obesity,” Mozaffarian told me. “Any explanation of why they cause weight loss is all post-hoc hand-waving now, because we have no idea. We have no idea why they really work and people are losing weight.”
  • The new drugs—and the “new understanding of obesity” that they have supposedly occasioned—could end up changing people’s attitudes toward body size. But in what ways
  • When the American Medical Association declared obesity a disease in 2013, Rebecca Puhl told me, some thought “it might reduce stigma, because it was putting more emphasis on the uncontrollable factors that contribute to obesity.” Others guessed that it would do the opposite, because no one likes to be “diseased.”
  • why wasn’t there another kind of nagging voice that wouldn’t stop—a sense of worry over what the future holds? And if she wasn’t worried for herself, then what about for Meghann or for Tristan, who are barely in their 40s? Wouldn’t they be on these drugs for another 40 years, or even longer? But Barb said she wasn’t worried—not at all. “The technology is so much better now.” If any problems come up, the scientists will find solutions.
Javier E

Trump's anger at courts, frayed alliances could upend approach to judicial issues - The... - 0 views

  • Under the Trump administration, the GOP-controlled Senate confirmed 174 district court judges, 54 circuit court judges and three Supreme Court justices — shifting the balance of the highest court to a 6-3 conservative majority. During his campaign rallies and events, Trump often likes to highlight the total, though he has exaggerated it.
  • In a 2022 interview with The Washington Post, McConnell recalled that Trump’s first candidacy had worried many conservatives at the time but that his Supreme Court list and picks had calmed their nerves and that his bargain with Trump had moved the country “right of center.”
  • McConnell and Trump have not spoken since late 2020, and Trump has repeatedly called for McConnell to be removed as the GOP leader of the Senate.
  • Trump and Leo, a prominent conservative lawyer influential in his first term, have not spoken since 2020, according to people familiar with the matter. Their relationship ended over a heated fight in 2020 at Mar-a-Lago, where Trump accused Leo of picking Rod J. Rosenstein to be deputy attorney general, a person familiar with the matter said. Trump’s anger around Rosenstein centered on his decision to appoint special counsel Robert S. Mueller III to oversee the Justice Department’s probe of Russian interference in the 2016 election
  • Trump has signaled that he wants the Justice Department to go after his political opponents, and his associates have drafted plans to invoke the Insurrection Act on his first day in office, which would allow him to send the military against civil demonstrations. Near the end of his time in the White House, he repeatedly complained that his White House Counsel’s Office wasn’t doing enough to help him overturn the election results. His attorney general resigned after he would not back up his claims.
  • “He’s the leading candidate, so I don’t know that it matters what I think,” said Brent O. Hatch, a lawyer who is on the board of the Federalist Society.
  • Although Trump reshaped the Supreme Court while in office, leading to the overturning of Roe, he has sometimes told others that the decision is a political albatross for Republicans. And he has complained recently at rallies about the Supreme Court and the decisions the judges make, saying without evidence they rule too often against Republicans to show “independence.”
  • Trump is running on a campaign focused, at least in part, on vengeance and retribution. The former president has made it clear that loyalty would be a key criterion in how he makes decisions if returned to office.
  • Most members of the Federalist Society board of directors declined to comment on the record or did not respond to a request for comment. Interviews with a dozen other prominent lawyers suggested most had serious misgivings about Trump returning to power but were resigned to the high likelihood he will be the nominee, and many expressed openness to working for another Trump administration.
  • There is a heated debate underway in conservative legal circles about how GOP lawyers should interact with what increasingly appears to be the likely nominee, according to conservative lawyers who described the private talks on the condition of anonymity. The discussions include whether they would return to work for Trump.
  • One prominent lawyer described a November dinner he attended where almost all the attorneys in the room said they would prefer another nominee — but were split on whether to back Trump if he wins
  • Leo, McConnell and McGahn have expressed reservations about what another Trump term would look like, though they have largely stayed away from a public fight.
  • Some of the informal conversations and debates underway in conservative legal circles about a second Trump term include Project 2025, a coalition of right-wing groups that has outlined plans for the next Republican administration. Clark, who is working on the Insurrection Act for Project 2025, has been charged with violating Georgia’s anti-racketeering law in the case accusing Trump and co-conspirators of interfering in the 2020 election. Clark has pleaded not guilty.
  • The involvement of Clark with that effort has alarmed some other conservative lawyers who view him as a potentially disastrous choice to take a senior leadership role at the department because of his past activities around the 2020 election.
  • Rob Kelner, a prominent conservative lawyer, said more conservative lawyers should have spoken up against Trump, but that it would cost them business and relationships.
  • “There were so many positions he took and so many statements that he made that flatly contradicted the foundational principles of the conservative movement and the Federalist Society, and yet it was so rare to hear conservative lawyers speak out against Trump,” Kelner said.
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington ... - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • ...9 more annotations...
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened his study of influence operations from around the world, including one it traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • In supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.