



Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
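The "geometric model of language" described above can be illustrated with toy word vectors: words that appear in similar contexts end up pointing in similar directions, and relatedness is measured by the angle between them. A minimal sketch, using hypothetical hand-set 2-D vectors (real models learn thousands of dimensions from data):

```python
import math

# Hypothetical 2-D "embeddings" chosen purely for illustration; a trained
# model learns these coordinates from billions of sentences.
vectors = {
    "king":  [0.90, 0.80],
    "queen": [0.85, 0.82],
    "apple": [0.10, 0.95],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Related words sit closer together in the geometry than unrelated ones.
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"])
```

In a real model the same geometry emerges from prediction alone: every small adjustment that improves next-word guesses nudges related words toward one another.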
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
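The mechanism behind "finish a sentence" is just next-word prediction applied repeatedly. A toy sketch, using a hand-built bigram table as a stand-in for the learned model (GPT's equivalent table is implicit in billions of parameters and conditions on far more than the previous word):

```python
# Hand-built probabilities of each next word given the previous word.
bigrams = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def complete(prompt_words, max_words=5):
    words = list(prompt_words)
    for _ in range(max_words):
        dist = bigrams.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        # Greedy decoding: always append the most probable next word.
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(complete(["the"]))  # → the cat sat down
```

Answering a question works the same way: the model has simply learned that, in its training text, question-shaped word sequences are most probably followed by answer-shaped ones.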
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
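The memorize-then-generalize transition in that arithmetic example can be caricatured in a few lines: a lookup table answers only the problems it has seen, while a learned rule covers unseen ones. (Illustrative only; inside a real transformer the pivot is gradual and distributed across parameters, not a discrete switch.)

```python
# Phase 1: pure memorization — a lookup table of training problems.
memorized = {(2, 2): 4, (3, 5): 8, (1, 7): 8}

def answer_by_memory(a, b):
    # Returns None for any problem outside the training set.
    return memorized.get((a, b))

# Phase 2: the learned concept — an actual rule for addition.
def answer_by_rule(a, b):
    return a + b

assert answer_by_memory(2, 2) == 4      # training data: memory suffices
assert answer_by_memory(6, 9) is None   # unseen problem: memorization fails
assert answer_by_rule(6, 9) == 15       # the learned rule generalizes
```

As the space of problems grows, memorization stops improving predictions, which is exactly the pressure that pushes the network toward the rule.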
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Why Is Stanley Fish Teaching at Florida's New College? - 0 views

  • Given how controversial New College is, why do you want to teach there now? Well, the simple nitty gritty reason is that I’m 85 years old, and someone who asks me to teach courses is a godsend. So I responded affirmatively.
  • At first I wanted to ask about Ralston College, in Savannah, Ga., which you’ve been involved with at the planning stage, and which seems to promise a kind of great books or neotraditional education.
  • It took about a decade of fundraising and planning and gift-giving for the college to begin but it’s now in operation. I was there less than a year ago, giving a lecture and talking to students and faculty members. I gave a talk about hate speech and free speech. And the morning before the talk, I attended a class on Homer, the Iliad. What was amazing about it was that not only was the Iliad being read in the original Greek, but the conversations between the students and the faculty member were being conducted in Greek. And six months before this course began, no student in it — and there were about 25 — had any knowledge whatsoever of the Greek language or Greek culture.
  • ...10 more annotations...
  • Yes, that’s right. And the discussion was very precise about details of the verse and how it worked, and how various words interacted with one another or were opposed to one another.
  • Not that I was able to participate! I wish I could. I took a little Greek 110 years ago and have long since forgotten it, but it was inspiring. These people were thoroughly engaged.
  • So that itself is an amazing piece of evidence. One might call it a piece of testimony. It seems almost impossible.
  • How did you know, if it was in Greek? Oh, I could tell that much. There’s a certain kind of gesturing with respect to texts that is known to any of us who have worked with texts for a while.
  • It’s been my mission, notably unsuccessful, for many years to make people understand that academic work, including in your writing and in your classes, is one thing and political work is another, and that the two should not be confused nor should they be intermingled. You can have any number of political issues brought into the classroom so long as they are brought into the classroom as objects of analysis or description and not as agendas either to be embraced or rejected. That’s what I’ve been arguing, one might even say preaching, for a long time.
  • I don’t want my classroom, or any classroom in a college or university that I’m teaching in, to be thought of as the vehicle of some program or agenda, no matter how virtuous it might be. Virtue is not the business of the academy
  • You have a famously minimalist definition of academic freedom — “Academics are not free in any special sense to do anything but their jobs,” as you write in Versions of Academic Freedom. Minimalist and correct.
  • When I was a dean at the College of Liberal Arts and Sciences at the University of Illinois at Chicago, I helped implement and inaugurate the first Native American-studies program at the University of Illinois. And I spoke at the inaugural luncheon. What I told them was, “It is without doubt the case that activism of a variety of kinds is what brought you to this point.” There wouldn’t now be a Native American-studies program at UIC if activists of a polemical kind weren’t working toward that end. “Now I want you,” I said, “to forget the history that brought you here, because now that you’re part of a university setting, you’re no longer activists, you’re academics. If you become or continue to be activists, the academics in the university will have a derisory view of you.”
  • It’s implausible to me that a dissertation student studying with, say, Judith Butler or Fredric Jameson is not, by definition, imbibing methods that are politically normative but also very valuable. A lot of critical traditions, in gender studies or in Marxian literary criticism — or in say, Straussian political theory — are entwined with normative political or ideological commitments. There’s no way to expel those commitments from a vibrant department of the humanities.
  • Well, you don’t have to expel them. The question you have to ask is, Are they primary in the minds of those who are teaching in the classrooms? If we’re in a community that has a certain set of standards and modes of operation, what we want to do is be faithful to those standards and modes of operation. And if now and then those deeper commitments kind of seep through, well, yes, that’s inevitable. But that’s quite different from having an ideologically centered classroom.
lilyrashkind

They Did Their Own 'Research.' Now What? - The New York Times - 0 views

  • Cryptocurrencies are notoriously volatile, but this wasn’t your average down day: People who thought they knew what they were getting into had, in the space of 24 hours, lost nearly everything. Messages of desperation flooded a Reddit forum for traders of one of the currencies, a coin called Luna, prompting moderators to share phone numbers for international crisis hotlines. Some posters (or “Lunatics,” as the currency’s creator, Do Kwon, has referred to them) shared hope for a turnaround or bailout; most were panicking, mourning and seeking advice.
  • But in the context of a broad collapse of trust in institutions and the experts who speak for them, it has come to mean something more specific. A common refrain in battles about Covid-19 and vaccination, politics and conspiracy theories, parenting, drugs, food, stock trading and media, it signals not just a rejection of authority but often trust in another kind.
  • DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out. Nowhere are the contradictions of DYOR on such vivid display as in the world of crypto, where the phrase is a rallying cry, a disclaimer, a meme and a joke — an invitation to a community as well as a reminder of its harsh limits.
  • ...5 more annotations...
  • Melissa Carrion, a professor at the University of Nevada, Las Vegas, who studies the rhetoric of health and medicine, spoke to 50 mothers who had refused one or more vaccines for their children for a study published in 2017. “Across the board, every single one of them gave some variation of the advice that a mother ‘should do her own research,’” she said in a phone interview. “It was this kind of worldview that was less about the result of the research than the individual process of doing it themselves.”
  • One of the enticing aspects of cryptocurrencies, which pose an alternative to traditional financial institutions, is that expertise is available to anyone who wants to claim it. There are people who’ve gotten rich, people who know a lot about blockchains and people who believe in the liberating power of digital currencies. There is some recent institutional interest. But nobody’s been around very long, which makes the idea of “researching” your way to prosperity feel more credible.
  • Cryptocurrency trading, in contrast to medicine, might represent DYOR in pure no-expert form. Virtually everyone is operating in a beginners’ bubble, whether they’re worried about it or not, betting with and against one another, in hopes of making money.
  • Here, so-called research materials are often limited to a white paper, marketing materials and testimonials, the “due diligence” posts of others, the reputations of a currency’s creators and the general sentiment of other possible buyers. Will they buy-in, too? Will we take this coin to the moon? In that way — the momentum of a group — crypto investing isn’t altogether distinct from how people have invested in the stock market for decades. Though here it is tinged with a rebellious, anti-authoritarian streak: We’re outsiders, in this together; we’re doing something sort of ridiculous, but also sort of cool. Though DYOR may be used to foster a sense of community, what it actually describes is participation in a market.
  • A year ago, Luna boosters (and a few skeptics) in online forums offered the same advice to gathered audiences of potential buyers reading their posts, looking for tips: just DYOR. Thousands invested in both Luna and TerraUSD. The price of Luna climbed from around $5 to over $100. After the crash, at least one Reddit user suggested that the situation highlighted the “limit” of DYOR; the coin’s price had fallen to nearly zero.
lilyrashkind

Georgia district attorney investigating Trump has subpoenaed officials from secretary o... - 0 views

  • An Atlanta-area district attorney investigating Donald Trump's efforts to overturn the 2020 election results has subpoenaed half a dozen officials from the Georgia secretary of state's office, according to copies of the documents obtained by CNN. The flurry of activity comes as a special grand jury is set to begin its work of investigating the former President and his allies on June 1. A person familiar with the investigation said secretary of state officials are not alone in receiving subpoenas in recent weeks, as the Fulton County district attorney's office has ramped up its investigative activity.
  • The subpoenas call for the witnesses to testify on dates from early to mid-June. Raffensperger, who has previously said he would comply with a subpoena, appears slated to be one of the first witnesses to testify on June 2. His call with Trump -- in which the former President pressured Raffensperger to "find" the votes needed for Trump to win Georgia -- lies at the heart of the Georgia probe.
  • Willis, meantime, has said she's not limiting her investigation to Trump's infamous call with Raffensperger. She has cast a wide net -- looking at Georgia's fake electors, former Trump lawyer Rudy Giuliani's conspiracy-ridden presentation to state lawmakers and other issues -- as she tries to determine whether Trump and his allies engaged in a broad criminal conspiracy to try to swing the Peach State to Trump's column.
  • ...1 more annotation...
  • The Atlanta Journal-Constitution also reported that one of its political reporters who covered the 2020 election, Greg Bluestein, has been told he should expect to receive a subpoena. The newspaper's managing editor, Shawn McIntosh, has said they would try to have any subpoena for Bluestein dismissed. She did not immediately respond to CNN's request for comment.
criscimagnael

In Patrice Nganang's Trilogy, Cameroon's Past Is Still Very Present - The New York Times - 0 views

  • The book, “A Trail of Crab Tracks,” explores the birth of independent Cameroon in the 1960s and its subsequent descent into civil war. Nganang, 52, wanted to get the details just right, from the experience of guerrilla fighters in the jungle to the names of plants and local rivers. “I was very careful,” Nganang said last month as a torrential spring rain fell outside his New Jersey home. “I didn’t want an older person to read it and say, ‘Come on, my son. It’s not right!’” His laughter, like a thunderclap, filled the room.
  • “The government was declaring war and cracking down on Anglophone protests,” he says. “They hadn’t started killing yet.”
  • Finally released and expelled from the country, Nganang emerged with a deeper commitment to his overarching project: an examination of Cameroon’s national identity, and how it has held within it the seeds of both great promise and disappointment.
  • ...8 more annotations...
  • Now, he adds, “there are 20,000 people dead.”
  • “The dream of Cameroon is contradictory,” he continues, “because on the one hand you have this violence and brutality and yet you also have this utopian idea: ‘Let’s dream big!’ That contradiction always inspired me, and I think it’s a reflection of the Cameroonian character, maybe the African character.”
  • “The three novels are like a cake: tripartite,” Nganang says. “They are very much complex, but I also wanted them to be entertaining.”
  • The family story mirrors a national history of silences and betrayals: The two are inevitably, and tragically, linked. “The Cameroonian soul is a battlefield,”
  • There is another overlap, too. The name “Tanou,” he says, means “father of history”
  • “I would never have written this book if I hadn’t been on social media,” he says, describing the countless testimonials that Cameroonians around the world have shared with him, which have fueled his posts and informed his novel. “It changed me and changed the landscape of my writing because it made it possible for people to actually hear what I want to say.”
  • Social media has certainly given Nganang a significant platform from afar on the country’s issues,
  • “History is the backbone of everything we do, everything that happens,” he says. “Are things going to change? Like Yaoundé’s neighborhoods, what is poor today was rich yesterday and what is rich now was once poor. The reality of Cameroon, and Africa in general, is that nothing is forever.”
Javier E

I Thought I Was Saving Trans Kids. Now I'm Blowing the Whistle. - 0 views

  • Another disturbing aspect of the center was its lack of regard for the rights of parents—and the extent to which doctors saw themselves as more informed decision-makers over the fate of these children.
  • when there was a dispute between the parents, it seemed the center always took the side of the affirming parent.
  • no matter how much suffering or pain a child had endured, or how little treatment and love they had received, our doctors viewed gender transition—even with all the expense and hardship it entailed—as the solution.
  • ...27 more annotations...
  • Besides teenage girls, another new group was referred to us: young people from the inpatient psychiatric unit, or the emergency department, of St. Louis Children’s Hospital. The mental health of these kids was deeply concerning—there were diagnoses like schizophrenia, PTSD, bipolar disorder, and more. Often they were already on a fistful of pharmaceuticals.
  • Being put on powerful doses of testosterone or estrogen—enough to try to trick your body into mimicking the opposite sex—affects the rest of the body. I doubt that any parent who's ever consented to give their kid testosterone (a lifelong treatment) knows that they’re also possibly signing their kid up for blood pressure medication, cholesterol medication, and perhaps sleep apnea and diabetes.
  • There are rare conditions in which babies are born with atypical genitalia—cases that call for sophisticated care and compassion. But clinics like the one where I worked are creating a whole cohort of kids with atypical genitals—and most of these teens haven’t even had sex yet. They had no idea who they were going to be as adults. Yet all it took for them to permanently transform themselves was one or two short conversations with a therapist.
  • Other girls were disturbed by the effects of testosterone on their clitoris, which enlarges and grows into what looks like a microphallus, or a tiny penis. I counseled one patient whose enlarged clitoris now extended below her vulva, and it chafed and rubbed painfully in her jeans. I advised her to get the kind of compression undergarments worn by biological men who dress to pass as female. At the end of the call I thought to myself, “Wow, we hurt this kid.”
  • How little patients understood what they were getting into was illustrated by a call we received at the center in 2020 from a 17-year-old biological female patient who was on testosterone. She said she was bleeding from the vagina. In less than an hour she had soaked through an extra heavy pad, her jeans, and a towel she had wrapped around her waist. The nurse at the center told her to go to the emergency room right away.
  • We found out later this girl had had intercourse, and because testosterone thins the vaginal tissues, her vaginal canal had ripped open. She had to be sedated and given surgery to repair the damage. She wasn’t the only vaginal laceration case we heard about.
  • Bicalutamide is a medication used to treat metastatic prostate cancer, and one of its side effects is that it feminizes the bodies of men who take it, including the appearance of breasts. The center prescribed this cancer drug as a puberty blocker and feminizing agent for boys. As with most cancer drugs, bicalutamide has a long list of side effects, and this patient experienced one of them: liver toxicity. He was sent to another unit of the hospital for evaluation and immediately taken off the drug. Afterward, his mother sent an electronic message to the Transgender Center saying that we were lucky her family was not the type to sue.
  • Here’s an example. On Friday, May 1, 2020, a colleague emailed me about a 15-year-old male patient: “Oh dear. I am concerned that [the patient] does not understand what Bicalutamide does.” I responded: “I don’t think that we start anything honestly right now.”
  • There are no reliable studies showing this. Indeed, the experiences of many of the center’s patients prove how false these assertions are. 
  • Many encounters with patients emphasized to me how little these young people understood the profound impacts changing gender would have on their bodies and minds. But the center downplayed the negative consequences, and emphasized the need for transition. As the center’s website said, “Left untreated, gender dysphoria has any number of consequences, from self-harm to suicide. But when you take away the gender dysphoria by allowing a child to be who he or she is, we’re noticing that goes away. The studies we have show these kids often wind up functioning psychosocially as well as or better than their peers.” 
  • When a female takes testosterone, the profound and permanent effects of the hormone can be seen in a matter of months. Voices drop, beards sprout, body fat is redistributed. Sexual interest explodes, aggression increases, and mood can be unpredictable. Our patients were told about some side effects, including sterility. But after working at the center, I came to believe that teenagers are simply not capable of fully grasping what it means to make the decision to become infertile while still a minor.
  • To begin transitioning, the girls needed a letter of support from a therapist—usually one we recommended—who they had to see only once or twice for the green light. To make it more efficient for the therapists, we offered them a template for how to write a letter in support of transition. The next stop was a single visit to the endocrinologist for a testosterone prescription. 
  • The doctors privately recognized these false self-diagnoses as a manifestation of social contagion. They even acknowledged that suicide has an element of social contagion. But when I said the clusters of girls streaming into our service looked as if their gender issues might be a manifestation of social contagion, the doctors said gender identity reflected something innate.
  • Frequently, our patients declared they had disorders that no one believed they had. We had patients who said they had Tourette syndrome (but they didn’t); that they had tic disorders (but they didn’t); that they had multiple personalities (but they didn’t).
  • The girls who came to us had many comorbidities: depression, anxiety, ADHD, eating disorders, obesity. Many were diagnosed with autism, or had autism-like symptoms. A report last year on a British pediatric transgender center found that about one-third of the patients referred there were on the autism spectrum.
  • This concerned me, but I didn't feel I was in a position to sound some kind of alarm back then. There was a team of about eight of us, and only one other person brought up the kinds of questions I had. Anyone who raised doubts ran the risk of being called a transphobe.
  • I certainly saw this at the center. One of my jobs was to do intake for new patients and their families. When I started there were probably 10 such calls a month. When I left there were 50, and about 70 percent of the new patients were girls. Sometimes clusters of girls arrived from the same high school. 
  • Until 2015 or so, a very small number of these boys comprised the population of pediatric gender dysphoria cases. Then, across the Western world, there began to be a dramatic increase in a new population: Teenage girls, many with no previous history of gender distress, suddenly declared they were transgender and demanded immediate treatment with testosterone. 
  • Soon after my arrival at the Transgender Center, I was struck by the lack of formal protocols for treatment. The center’s physician co-directors were essentially the sole authority.
  • At first, the patient population was tipped toward what used to be the “traditional” instance of a child with gender dysphoria: a boy, often quite young, who wanted to present as—who wanted to be—a girl. 
  • During the four years I worked at the clinic as a case manager—I was responsible for patient intake and oversight—around a thousand distressed young people came through our doors. The majority of them received hormone prescriptions that can have life-altering consequences—including sterility. 
  • I left the clinic in November of last year because I could no longer participate in what was happening there. By the time I departed, I was certain that the way the American medical system is treating these patients is the opposite of the promise we make to “do no harm.” Instead, we are permanently harming the vulnerable patients in our care.
  • Today I am speaking out. I am doing so knowing how toxic the public conversation is around this highly contentious issue—and the ways that my testimony might be misused. I am doing so knowing that I am putting myself at serious personal and professional risk.
  • Almost everyone in my life advised me to keep my head down. But I cannot in good conscience do so. Because what is happening to scores of children is far more important than my comfort. And what is happening to them is morally and medically appalling.
  • For almost four years, I worked at The Washington University School of Medicine Division of Infectious Diseases with teens and young adults who were HIV positive. Many of them were trans or otherwise gender nonconforming, and I could relate: Through childhood and adolescence, I did a lot of gender questioning myself. I’m now married to a transman, and together we are raising my two biological children from a previous marriage and three foster children we hope to adopt. 
  • The center’s working assumption was that the earlier you treat kids with gender dysphoria, the more anguish you can prevent later on. This premise was shared by the center’s doctors and therapists. Given their expertise, I assumed that abundant evidence backed this consensus. 
  • All that led me to a job in 2018 as a case manager at The Washington University Transgender Center at St. Louis Children's Hospital, which had been established a year earlier. 
Javier E

Conservative Media Pay Little Attention to Revelations About Fox News - The New York Times - 0 views

  • Fox News and its sister network, Fox Business, have avoided the story. Newsmax and One America News, Fox’s rivals on the right, have steered clear, too. So have a constellation of right-wing websites and podcasts.
  • Over the past two weeks, legal filings containing private messages and testimony from Fox hosts and executives revealed that many of them had serious doubts that Democrats stole the 2020 presidential election through widespread voter fraud, even as those claims were made repeatedly on Fox’s shows.
  • On 26 of the most popular conservative television news networks, radio shows, podcasts and websites, only four — National Review, Townhall, The Federalist and Breitbart News — have mentioned the private messages from Fox News hosts that disparaged election fraud claims since Feb. 16
  • ...13 more annotations...
  • “Choosing not to do stories is a form of bias,”
  • Four outlets mentioned the lawsuit in some way, but did not mention the comments from Fox News hosts. One of those, The Gateway Pundit, published three articles that included additional unfounded allegations about Dominion, including a suggestion that security vulnerabilities at one election site using Dominion machines could have led to some fraud, despite no evidence that votes were mismanaged.
  • The majority — 18 in all, including Fox News itself — did not cover the lawsuit at all with their own staff.
  • The lone on-air mention of the case on Fox News has been by Howard Kurtz, who hosts the weekly Fox News show “MediaBuzz.” He addressed the Dominion case on the air this week, telling viewers: “I believe I should be covering it.” “But,” he continued, “the company has decided as part of the organization being sued, I can’t talk about it or write about it, at least for now. I strongly disagree with that decision, but as an employee I have to abide by it.”
  • Mainstream news organizations often report on themselves when they are at the center of a scandal, Mr. Rosenstiel said, because they get “much more credit when they expose the lens on themselves as aggressively as they would anyone else.”
  • “The things you ignore and the things you choose to highlight are an important part of how you show whether you are a serious news organization.”
  • There are no legal orders barring media organizations from covering lawsuits they are involved in. And Mr. Rosenstiel pointed to a long history of past suits and scandals covered by the news organizations involved. The Washington Post, for example, ran a deeply reported article on how and why a reporter had made up a character in an article that won a Pulitzer Prize in 1981. The prize had been withdrawn a few days earlier after the fraud was uncovered
  • Fox’s lawyers might fear that anything said on the air could be used against the company at the trial
  • “From an ethical perspective, I’d say it’s a real disservice to their viewers on Fox not to be covering this,” she said.
  • Newsmax and The Washington Examiner — two of the four outlets reviewed by The Times that mentioned Dominion’s lawsuit but not the specific comments by Fox News’s hosts — have focused on Rupert Murdoch’s private messages, including that he saw Newsmax as a potential threat to Fox News
  • The hosts’ comments have also not been a focus of users on right-wing social media. Instead, many users on sites like Gab and Truth Social accused Mr. Murdoch of disloyalty to former President Donald J. Trump
  • One of the articles by The Gateway Pundit that advanced voter fraud narratives about Dominion was the most-shared story about the case on right-wing social media, according to data from Pyrra Technologies, a company that monitors the right-wing internet.
  • When users on right-wing social networks discussed the Fox News hosts, many criticized Mr. Carlson, Mr. Hannity and others for not fully believing the election fraud lies they appeared to endorse, Pyrra found.
criscimagnael

Biden Will Call for More Limits on Social Media in State of the Union Address - The New... - 0 views

  • President Biden will call in his Tuesday night address for limits on potentially harmful interactions between children and social media platforms.
  • He will ask Congress to ban targeted ads aimed at children on social media sites,
  • In turn, the critics say that young people can be fed increasingly extreme content or posts that diminish their self-worth.
  • ...3 more annotations...
  • the platforms “should be required to prioritize and ensure” the safety and health of young people, including when they make design choices for their product, according to a fact sheet. And he will call for more research into how social media affects mental health and new scrutiny of the algorithms that often determine what someone sees online.
  • One of the guests joining the first lady, Jill Biden, for the speech will be Frances Haugen, a former Facebook employee who leaked documents that, among other things, showed that some teenagers said Instagram made them feel worse about themselves.
  • But the United States lags behind many of its allies in taking concrete steps to shield children from extreme posts, addicting content and data collection online. Last year, new guidelines took effect in the United Kingdom that push platforms to limit the data they gather on young people, prompting several companies to implement more child safety features.
lilyrashkind

Judge Jackson takes empathetic approach to impartiality: ANALYSIS - ABC News - 0 views

  • Supreme Court nominee Ketanji Brown Jackson never uttered the word 'empathy' in nearly 19 hours of testimony before the Senate Judiciary Committee this week, but she effectively made clear it's a hallmark of her style and an asset to judicial credibility
  • Jackson also insisted it has no influence on her legal decisions. "I am not importing my personal views or policy preferences," she told the committee. "The entire exercise is about trying to understand what those who created this policy or this law intended."
  • What Judge Jackson and her supporters tout as a selling point, Republican critics call a major liability.
  • ...7 more annotations...
  • Republican Sen. Thom Tillis of North Carolina told her, "it seems as though you're a very kind person and there's at least a level of empathy that enters into your treatment of a defendant." "Maybe beyond what some of us would be comfortable with with respect to administering justice," Tillis added.
  • The partisan clash over empathy -- which some have dubbed the "Empathy Wars" -- has its roots in a campaign promise by Barack Obama more than 15 years ago, when the then-presidential candidate made the quality a key criterion for a high court nominee.
  • "My attempts to communicate directly with defendants is about public safety," Jackson told Tillis, who scrutinized her treatment of child porn offenders, "because most of the people who are incarcerated via the federal system, and even via the state system, will come out, will be a part of our communities again."
  • "I just don't understand why after saying this and believing this, you could give this guy three months in prison," said Sen. Josh Hawley, R-Missouri, who spent the entirety of his time questioning Jackson's below-guidelines sentence in a child porn case involving an 18-year-old offender. "Do you have anything to add?" "No, senator," Jackson shot back.
  • Having empathy on the high court was once widely considered a vaunted quality. Justice Stephen Breyer, whom Jackson would succeed, called empathy "a crucial quality [to have] in a judge." Justice Anthony Kennedy, a Ronald Reagan appointee, said in 2013 that empathy requires "caution" but that cases are "stories about real people" and that judges must understand "real people are going to be bound by what you do."
  • But other jurists take a broader view. "Wisdom, as opposed to the more narrow empathy, is a foundational requirement throughout our legal system," said Sarah Isgur, a former Justice Department lawyer and ABC News legal analyst. "A judicial philosophy may have empathy as one element of it, but it strives to treat similar situations alike by creating a framework to determine which cases are similar and which aren't," Isgur said. "Judge Jackson was never able to articulate a judicial philosophy and without one, empathy can actually be the antithesis of justice."
  • "In my capacity as a justice, I would do what I've done for the past decade, which is to rule from a position of neutrality, to look carefully at the facts and the circumstances of every case, without any agendas, without any attempt to push the law in one direction or the other," Jackson said, "and to render rulings that I believe and that I hope that people would have
lilyrashkind

Ketanji Brown Jackson: Key takeaways from the Supreme Court confirmation hearings - CNN... - 0 views

  • Judge Ketanji Brown Jackson spent three days in front of the Senate Judiciary Committee -- two of them marathon sessions of questioning -- where she described herself as an impartial and transparent jurist, while taking a calm but forceful tone to push back at GOP claims about her record. The dueling themes that Democrats and Republicans wanted to present about her nomination were punched up in a final day of testimony from outside witnesses Thursday.
  • While she may pick up a few Republican votes, several GOP senators have sought to paint her as a soft-on-crime, "activist" judge, as they've used her hearings to showcase their messaging themes against Democrats heading into November's midterms.
  • "I am here, standing on the shoulders of generations of Americans who never had anything close to this kind of opportunity," Jackson said Tuesday. She highlighted how her grandparents received little formal education and that her parents went to segregated lower schools in Miami, before studying at Howard University.
  • ...9 more annotations...
  • As the Senate's questioning was close to winding up Wednesday, Jackson -- at the request of California Democratic Sen. Alex Padilla -- reflected on what message she'd give to young people feeling doubtful of their own abilities as they watched her ascent. She recalled feeling out of place and homesick during her first semester at Harvard University as an undergraduate
  • Coming out of the hearings, Democrats were insistent as ever that Jackson belonged on America's highest court and that they intended to put her there. "She will be confirmed. She will be a star on the Supreme Court," Sen. Patrick Leahy, a Vermont Democrat, said after Wednesday's hearing. "And I for one will proudly cast my vote for her."
  • In the lead-up to the hearings, Republicans previewed a "dignified" approach to the nominee that would be "respectful" in tone and "substantive" in content.
  • "underscores the dangers of the kind of progressive education that we are hearing about."
  • Several of Jackson's harshest questioners are believed to be in contention for a 2024 presidential run. Other talking points the GOP has forecast for the 2022 midterm campaign also made their way into the questioning. Cruz badgered her about "critical race theory" -- an academic discipline that looks at systemic racism -- even as Jackson insisted it plays no role in how she approaches judging. At one point he grilled her on the presence of the children's book "Antiracist Baby" in the curriculum of the private school for which Jackson serves on the board.
  • The proceedings were at their ugliest in the lines of Republican inquiry focused on the sentences Jackson handed down in a select set of child pornography cases. Republicans argued that she was unduly lenient toward those offenders -- a claim at odds with the fact that her record is mostly in line with how judges typically approach these cases.
  • The Republicans said that they were disappointed she didn't identify a specific judicial philosophy -- like the originalism or textualism strains favored by conservatives -- that she followed. But just as notable was the distance she put between herself and the judicial approaches that had typically been heralded by progressives.
  • Republicans make a case for the Supreme Court to revisit Roe, same-sex marriage and other key rulings
  • He suggested that the legal basis for that ruling -- a concept known as substantive due process, which also underpins rulings on interracial marriage and birth control -- was a principle that "allows the court to substitute its opinion for the elected representatives of the people."
lilyrashkind

Jury Awards $14M to George Floyd Protesters in Denver | Time - 0 views

  • A jury found that Denver police violated the rights of demonstrators protesting the murder of George Floyd two years ago, ordering the city to pay a total of $14 million in damages to a group of 12 who sued. The jury of two men and six women, largely white and drawn from around Colorado, returned its verdict after about four hours of deliberations. The verdict followed three weeks of testimony and evidence that included police and protester video of incidents.
  • The protesters who sued were shot at or hit by everything from pepper spray to a Kevlar-bag filled with lead shot fired from a shotgun. Zach Packard, who was hit in the head by the shotgun blast and ended up in the intensive care unit, received the largest damage amount — $3 million.
  • One of the protesters’ lawyers, Timothy Macdonald, had urged jurors during closing arguments to send a message to police in Denver and elsewhere by finding the city liable. “Hopefully, what police departments will take from this is a jury of regular citizens takes these rights very seriously,” he said after the verdict.
  • ...4 more annotations...
  • “It feels like being seen,” Epps said. The protesters said the actions of police violated their free speech rights and rights to be protected from unreasonable force. Jurors found violations of both rights for 11 of the protesters and only free speech violations for the other. The protesters claimed Denver was liable for the police’s actions through its policies, including giving officers wide discretion in using what police call “less lethal” devices, failing to train officers on them, and not requiring them to use their body-worn cameras during the protests to deter indiscriminate uses of force.
  • She stressed that mistakes made by officers during the protests do not automatically equate to constitutional violations, noting thousands of people returned to exercise their free speech rights despite the force police used over the five days of demonstrations. “The violence and destruction that occurred around the community required intervention,” she said.
  • Aggressive responses from officers to people protesting police brutality nationally have led to financial settlements, the departures of police chiefs and criminal charges.
  • However, in 2021, a federal judge dismissed most of the claims filed by activists and civil liberties groups over the forcible removal of protesters by police before then-President Donald Trump walked to a church near the White House for a photo op.