
History Readings: group items tagged “intuition”


Javier E

On Trump, Keep it Simple (In 5 Points)

  • at this point we know Trump quite well.
  • 1: Trump is a Damaged Personality: Trump is an impulsive narcissist who is easily bored and driven mainly by the desire to chalk up 'wins' which drive the affirmation and praise which are his chief need and drive. He needs to dominate everyone around him and is profoundly susceptible to ego injuries tied to not 'winning', not being the best, not being sufficiently praised and acclaimed, etc. All of this drives a confrontational style and high levels of organizational chaos and drama.
  • 2: Trump is a Great Communicator: Trump has an intuitive and profound grasp of a certain kind of branding. It's not sophisticated. But mass branding seldom is. It is intuitive, even primal. 'Make America Great Again' may be awful and retrograde in all its various meanings. But it captured in myriad ways almost every demand, fear and grievance that motivated the Americans who eventually became the Trump base
  • ...7 more annotations...
  • Despite his manic temperament, impulsiveness and emotional infantility, this acumen gives him real and in some ways profound communication skills. The two don't cancel each other out. They are both always present. They grow from the same root.
  • Despite all their differences, Trump meets his voters in a common perception (real or not) of being shunned, ignored and disrespected by 'elites'. In short, his politics and his connection with his core voters is based on grievance. This is a profound and enduring connection. This part of his constituency likely amounts to only 25% or 30% of the electorate at most. But it is a powerful anchor on the right.
  • the greatest single explanation of Trump is that his politics profoundly galvanized a minority of the electorate and only a minority of the electorate. Almost everyone who wasn't galvanized was repulsed. But once he had secured the GOP nomination with that minority, the power of partisan polarization kicked in to lock into place perhaps the next 15% to 20% of the electorate which otherwise would never have supported him.
  • As long as Trump remains "us" to Republican voters, I see little reason to think anything we can imagine will shake that very high level of support he gets from self-identified Republicans. That likely means that, among other things, no matter how unpopular Trump gets, Republican lawmakers will continue to support him because the chance of ending their careers is greater in a GOP primary than in a general election
  • if Trump's ideology is fluid, he has drawn around him advisors who can only be termed extremists. I believe the chief reason is that Trump's authoritarian personality resonates with extremist politics and vice versa. We should expect them to keep catalyzing each other in dangerous and frightening ways.
  • What does all this mean? We should not think in terms of counter-intuition or 12 dimensional chess. Trump wants to be President and he wants to win and be the best. But he is generally unpopular, has a policy agenda which has great difficulty achieving majority support and a temperament which makes effective governance profoundly difficult.
  • That mix makes the praise and affirmation he craves as President extremely challenging to achieve. Like many with similar temperaments and personalities he has a chronic need to generate drama and confrontation to stabilize himself. It's that simple. It won't change. It won't get better.
Javier E

Does Sam Altman Know What He’s Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy version of this prediction-driven training appears in the first code sketch after these annotations.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill. (The probing technique behind this kind of discovery is sketched in code after these annotations.)
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add. (A minimal version of this memorize-then-generalize experiment is sketched in code after these annotations.)
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
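
One of the annotations above recounts Hinton's explanation that a neural network's learning is powered by prediction, and that its little adjustments coalesce into a geometric model of the relationships among words. The toy training loop below makes that idea concrete. It is a minimal sketch, not anything resembling OpenAI's actual code: the corpus, model sizes, and hyperparameters are invented for illustration, and it assumes PyTorch is installed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

corpus = "the cat sat on the mat the dog sat on the rug".split()  # toy stand-in for training text
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus])

embed = nn.Embedding(len(vocab), 16)  # the "geometric model": one 16-dimensional vector per word
head = nn.Linear(16, len(vocab))      # maps a word's vector to scores for every possible next word
opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=0.05)

inputs, targets = ids[:-1], ids[1:]   # the only task: predict each word from the word before it
for step in range(300):
    logits = head(embed(inputs))             # predictions for every position
    loss = F.cross_entropy(logits, targets)  # how wrong they were
    opt.zero_grad()
    loss.backward()                          # compute the little adjustments
    opt.step()                               # apply them

# Words used in similar contexts tend to end up with nearby vectors.
vecs = embed.weight.detach()
sim = F.cosine_similarity(vecs[stoi["cat"]], vecs[stoi["dog"]], dim=0).item()
print(f"cosine similarity of 'cat' and 'dog' vectors: {sim:.2f}")
```

The article's larger claim is that this same next-word objective, scaled up enormously in data and model size, is what gave GPT-4 its rich conceptual model of the world.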
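Another annotation describes Kenneth Li "looking under the AI's hood" of an Othello-playing model and finding a model of the board. One common way researchers make that kind of discovery is with probing classifiers: small models trained to read a property, such as the contents of each square, off the network's hidden activations. The sketch below shows the generic probing recipe rather than Li's exact setup; the arrays are random placeholders standing in for real hidden states and board labels, and it assumes NumPy and scikit-learn are available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, hidden_dim, n_squares = 2000, 128, 64

# Placeholders: in a real experiment these would be the model's hidden activations
# at each game position, and the true contents of each board square at that moment.
hidden_states = rng.normal(size=(n_positions, hidden_dim))
board_labels = rng.integers(0, 3, size=(n_positions, n_squares))  # 0 empty, 1 black, 2 white

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, board_labels, random_state=0)

# One probe per square: if a probe predicts held-out positions well, that square's
# contents are (linearly) decodable from the hidden state.
accuracies = []
for sq in range(n_squares):
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr[:, sq])
    accuracies.append(probe.score(X_te, y_te[:, sq]))

print(f"mean held-out probe accuracy: {np.mean(accuracies):.2f} (chance is roughly 0.33 here)")
```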
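A third annotation describes a small transformer that first memorized answers such as 2+2=4 and then pivoted to actually learning how to add. The experimental setup is easy to sketch: train on only part of the space of addition problems and watch training accuracy against accuracy on held-out problems. The version below is illustrative only; it uses modular addition and a small multilayer perceptron in place of the transformer, and whether or when held-out accuracy catches up depends heavily on the architecture, the weight decay, and the training budget.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

p = 97  # addition modulo a small prime keeps the full problem set tiny
pairs = torch.tensor([(a, b) for a in range(p) for b in range(p)])
targets = (pairs[:, 0] + pairs[:, 1]) % p

perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2:]  # half the problems are never trained on

model = nn.Sequential(
    nn.Embedding(p, 32),   # a vector for each number
    nn.Flatten(),          # concatenate the two operand vectors
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, p),     # scores for each possible answer
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(dim=-1) == targets[idx]).float().mean().item()

for step in range(5001):
    loss = F.cross_entropy(model(pairs[train_idx]), targets[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:5d}  train acc {accuracy(train_idx):.2f}  held-out acc {accuracy(test_idx):.2f}")
```

Memorization alone cannot solve the held-out half, so a persistent gap between the two accuracy curves is the signature of memorization, and its eventual closing is the signature of having learned the rule.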
Javier E

How Technology Wrecks the Middle Class - NYTimes.com

  • the productivity of American workers — those lucky enough to have jobs — has risen smartly
  • the United States still has two million fewer jobs than before the downturn, the unemployment rate is stuck at levels not seen since the early 1990s and the proportion of adults who are working is four percentage points off its peak in 2000.
  • Do “smart machines” threaten us with “long-term misery,” as the economists Jeffrey D. Sachs and Laurence J. Kotlikoff prophesied earlier this year?
  • ...17 more annotations...
  • Economists have historically rejected what we call the “lump of labor” fallacy: the supposition that an increase in labor productivity inevitably reduces employment because there is only a finite amount of work to do. While intuitively appealing, this idea is demonstrably false.
  • Labor-saving technological change necessarily displaces workers performing certain tasks — that’s where the gains in productivity come from — but over the long run, it generates new products and services that raise national income and increase the overall demand for labor.
  • The multi-trillionfold decline in the cost of computing since the 1970s has created enormous incentives for employers to substitute increasingly cheap and capable computers for expensive labor.
  • Computers excel at “routine” tasks: organizing, storing, retrieving and manipulating information, or executing exactly defined physical movements in production processes. These tasks are most pervasive in middle-skill jobs
  • Logically, computerization has reduced the demand for these jobs, but it has boosted demand for workers who perform “nonroutine” tasks that complement the automated activities
  • At one end are so-called abstract tasks that require problem-solving, intuition, persuasion and creativity.
  • On the other end are so-called manual tasks, which require situational adaptability, visual and language recognition, and in-person interaction.
  • Computerization has therefore fostered a polarization of employment, with job growth concentrated in both the highest- and lowest-paid occupations, while jobs in the middle have declined.
  • overall employment rates have largely been unaffected in states and cities undergoing this rapid polarization.
  • So computerization is not reducing the quantity of jobs, but rather degrading the quality of jobs for a significant subset of workers. Demand for highly educated workers who excel in abstract tasks is robust, but the middle of the labor market, where the routine task-intensive jobs lie, is sagging.
  • Spurred by growing demand for workers performing abstract job tasks, the payoff for college and professional degrees has soared; despite its formidable price tag, higher education has perhaps never been a better investment.
  • The good news, however, is that middle-education, middle-wage jobs are not slated to disappear completely. While many middle-skill jobs are susceptible to automation, others demand a mixture of tasks that take advantage of human flexibility
  • we predict that the middle-skill jobs that survive will combine routine technical tasks with abstract and manual tasks in which workers have a comparative advantage — interpersonal interaction, adaptability and problem-solving.
  • this category includes numerous jobs for people in the skilled trades and repair: plumbers; builders; electricians; heating, ventilation and air-conditioning installers; automotive technicians; customer-service representatives; and even clerical workers who are required to do more than type and file
  • Lawrence F. Katz, a labor economist at Harvard, memorably called those who fruitfully combine the foundational skills of a high school education with specific vocational skills the “new artisans.”
  • The outlook for workers who haven’t finished college is uncertain, but not devoid of hope. There will be job opportunities in middle-skill jobs, but not in the traditional blue-collar production and white-collar office jobs of the past
  • we expect to see growing employment among the ranks of the “new artisans”: licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides
Javier E

Jordan Peterson's Gospel of Masculinity | The New Yorker - 0 views

  • his accent and vocabulary combine to make him seem like a man out of time and out of place, especially in America.
  • His central message is a thoroughgoing critique of modern liberal culture, which he views as suicidal in its eagerness to upend age-old verities.
  • a possibly spurious quote that nevertheless captures his style and his substance: “Sort yourself out, bucko.”
  • ...41 more annotations...
  • His fame grew in 2016, during the debate over a Canadian bill known as C-16. The bill sought to expand human-rights law by adding “gender identity and gender expression” to the list of grounds upon which discrimination is prohibited. In a series of videotaped lectures, Peterson argued that such a law could be a serious infringement of free speech
  • His main focus was the issue of pronouns: many transgender or gender-nonbinary people use pronouns different from the ones they were assigned at birth—including, sometimes, “they,” in the singular, or nontraditional ones, like “ze.” The Ontario Human Rights Commission had found that, in a workplace or a school, “refusing to refer to a trans person by their chosen name and a personal pronoun that matches their gender identity” would probably be considered discrimination.
  • Peterson resented the idea that the government might force him to use what he called neologisms of politically correct “authoritarians.”
  • To many people disturbed by reports of intolerant radicals on campus, Peterson was a rallying figure: a fearsomely self-assured debater, unintimidated by liberal condemnation.
  • He remains a psychology professor by trade, and he still spends much of his time doing something like therapy. Anyone in need of his counsel can find plenty of it in “12 Rules for Life.”
  • One of his many fans is PewDiePie, a Swedish video gamer who is known as the most widely viewed YouTube personality in the world—his channel has more than sixty million subscribers
  • In a video review of “12 Rules for Life,” PewDiePie confessed that the book had surprised him. “It’s a self-help book!” he said. “I don’t think I ever would have read a self-help book.” (He nonetheless declared that Peterson’s book, at least the parts he read, was “very interesting.”)
  • Political polemic plays a relatively small role; Peterson’s goal is less to help his readers change the world than to help them find a stable place within it. One of his most compelling maxims is strikingly modest: “You should do what other people do, unless you have a very good reason not to.”
  • Of course, he is famous today precisely because he has determined that, in a range of circumstances, there are good reasons to buck the popular tide.
  • He is, by turns, a defender of conformity and a critic of it, and he thinks that if readers pay close attention, they, too, can learn when to be which.
  • “I stopped attending church, and joined the modern world.” He turned first to socialism and then to political science, seeking an explanation for “the general social and political insanity and evil of the world,” and each time finding himself unsatisfied.
  • The question was, he decided, a psychological one, so he sought psychological answers, and eventually earned a Ph.D. from McGill University, having written a thesis examining the heritability of alcoholism.
  • In “Maps of Meaning,” Peterson drew from Jung, and from evolutionary psychology: he wanted to show that modern culture is “natural,” having evolved over hundreds of thousands of years to reflect and meet our human needs.
  • Then, rather audaciously, he sought to explain exactly how our minds work, illustrating his theory with elaborate geometric diagrams
  • In “Maps of Meaning,” he traced this sense of urgency to a feeling of fraudulence that overcame him in college. When he started to speak, he would hear a voice telling him, “You don’t believe that. That isn’t true.” To ward off mental breakdown, he resolved not to say anything unless he was sure he believed it; this practice calmed the inner voice, and in time it shaped his rhetorical style, which is forceful but careful.
  • “You have to listen very carefully and tell the truth if you are going to get a paranoid person to open up to you,” he writes. Peterson seems to have found that this approach works on much of the general population, too.
  • He is particularly concerned about boys and men, and he flatters them with regular doses of tough love. “Boys are suffering in the modern world,” he writes, and he suggests that the problem is that they’re not boyish enough. Near the end of the chapter, he tries to coin a new catchphrase: “Toughen up, you weasel.”
  • his tone is more pragmatic in this book, and some of his critics might be surprised to find much of the advice he offers unobjectionable, if old-fashioned: he wants young men to be better fathers, better husbands, better community members.
  • Where the pickup artists promised to make guys better sexual salesmen (sexual consummation was called “full close,” as in closing a deal), Peterson, more ambitious, promises to help them get married and stay married. “You have to scour your psyche,” he tells them. “You have to clean the damned thing up.
  • When he claims to have identified “the culminating ethic of the canon of the West,” one might brace for provocation. But what follows, instead, is prescription so canonical that it seems self-evident: “Attend to the day, but aim at the highest good.” In urging men to overachieve, he is also urging them to fit in, and become productive members of Western society.
  • Every so often, Peterson pauses to remind his readers how lucky they are. “The highly functional infrastructure that surrounds us, particularly in the West,” he writes, “is a gift from our ancestors: the comparatively uncorrupt political and economic systems, the technology, the wealth, the lifespan, the freedom, the luxury, and the opportunity.”
  • Peterson seems to view Trump, by contrast, as a symptom of modern problems, rather than a cause of them. He suggests that Trump’s rise was unfortunate but inevitable—“part of the same process,” he writes, as the rise of “far-right” politicians in Europe. “If men are pushed too hard to feminize,” he warns, “they will become more and more interested in harsh, fascist political ideology.”
  • Peterson sometimes asks audiences to view him as an alternative to political excesses on both sides. During an interview on BBC Radio 5, he said, “I’ve had thousands of letters from people who were tempted by the blandishments of the radical right, who’ve moved towards the reasonable center as a consequence of watching my videos.”
  • But he typically sees liberals, or leftists, or “postmodernists,” as aggressors—which leads him, rather ironically, to frame some of those on the “radical right” as victims. Many of his political stances are built on this type of inversion.
  • Postmodernists, he says, are obsessed with the idea of oppression, and, by waging war on oppressors real and imagined, they become oppressors themselves. Liberals, he says, are always talking about the importance of compassion—and yet “there’s nothing more horrible for children, and developing people, than an excess of compassion.”
  • The danger, it seems, is that those who want to improve Western society may end up destroying it.
  • But Peterson remains a figurehead for the movement to block or curtail transgender rights. When he lampoons “made-up pronouns,” he sometimes seems to be lampooning the people who use them, encouraging his fans to view transgender or gender-nonbinary people as confused, or deluded
  • Once, after a lecture, he was approached on campus by a critic who wanted to know why he would not use nonbinary pronouns. “I don’t believe that using your pronouns will do you any good, in the long run,” he replied.
  • In a debate about gender on Canadian television, in 2016, he tried to find some middle ground. “If our society comes to some sort of consensus over the next while about how we’ll solve the pronoun problem,” he said, “and that becomes part of popular parlance, and it seems to solve the problem properly, without sacrificing the distinction between singular and plural, and without requiring me to memorize an impossible list of an indefinite number of pronouns, then I would be willing to reconsider my position.
  • Despite his fondness for moral absolutes, Peterson is something of a relativist; he is inclined to defer to a Western society that is changing in unpredictable ways
  • Peterson excels at explaining why we should be careful about social change, but not at helping us assess which changes we should favor; just about any modern human arrangement could be portrayed as a radical deviation from what came before.
  • In the case of gender identity, Peterson’s judgment is that “our society” has not yet agreed to adopt nontraditional pronouns, which isn’t quite an argument that we shouldn’t.
  • Peterson—like his hero, Jung—has a complicated relationship to religious belief. He reveres the Bible for its stories, reasoning that any stories that we have been telling ourselves for so long must be, in some important sense, true.
  • In a recent podcast interview, he mentioned that people sometimes ask him if he believes in God. “I don’t respond well to that question,” he said. “The answer to that question is forty hours long, and I can’t condense it into a sentence.”
  • At times, Peterson emphasizes his interest in empirical knowledge and scientific research—although these tend to be the least convincing parts of “12 Rules for Life.”
  • Peterson’s story about the lobster is essentially a modern myth. He wants forlorn readers to imagine themselves as heroic lobsters; he wants an image of claws to appear in their mind whenever they feel themselves start to slump; he wants to help them.
  • Peterson wants to help everyone, in fact. In his least measured moments, he permits himself to dream of a world transformed. “Who knows,” he writes, “what existence might be like if we all decided to strive for the best?
  • His many years of study fostered in him a conviction that good and evil exist, and that we can discern them without recourse to any particular religious authority. This is a reassuring belief, especially in confusing times: “Each human being understands, a priori, perhaps not what is good, but certainly what is not.
  • there are therapists and life coaches all over the world dispensing some version of this formula, nudging their clients to pursue lives that better conform to their own moral intuitions. The problem is that, when it comes to the question of how to order our societies—when it comes, in other words, to politics—our intuitions have proved neither reliable nor coherent.
  • The “highly functional infrastructure” he praises is the product of an unceasing argument over what is good, for all of us; over when to conform, and when to dissent
  • We can, most of us, sort ourselves out, or learn how to do it. That doesn’t mean we will ever agree on how to sort out everyone else.
Javier E

This Is Not a Market | Dissent Magazine - 0 views

  • Given how ordinary people use the term, it’s not surprising that academic economists are a little vague about it—but you’ll be glad to hear that they know they’re being vague. A generation of economists have criticized their colleagues’ inability to specify what a “market” actually is. George Stigler, back in 1967, thought it “a source of embarrassment that so little attention has been paid to the theory of markets.” Sociologists agree: according to Harrison White, there is no “neoclassical theory of the market—[only] a pure theory of exchange.” And Wayne Baker found that the idea of the market is “typically assumed—not studied” by most economists, who “implicitly characterize ‘market’ as a ‘featureless plane.’
  • When we say “market” now, we mean nothing particularly specific, and, at the same time, everything—the entire economy, of course, but also our lives in general. If you can name it, there’s a market in it: housing, education, the law, dating. Maybe even love is “just an economy based on resource scarcity.”
  • The use of markets to describe everything is odd, because talking about “markets” doesn’t even help us understand how the economy works—let alone the rest of our lives. Even though nobody seems to know what it means, we use the metaphor freely, even unthinkingly. Let the market decide. The markets are volatile. The markets responded poorly. Obvious facts—that the economy hasn’t rebounded after the recession—are hidden or ignored, because “the market” is booming, and what is the economy other than “the market”? Well, it’s lots of other things. We might see that if we talked about it a bit differently.
  • ...9 more annotations...
  • For instance, we might choose a different metaphor—like, say, the traffic system. Sounds ridiculous? No more so than the market metaphor. After all, we already talk about one important aspect of economic life in terms of traffic: online activity. We could describe it in market terms (the market demands Trump memes!), but we use a different metaphor, because it’s just intuitively more suitable. That last Trump meme is generating a lot of traffic. Redirect your attention as required.
  • We don’t know much about markets, because we don’t deal with them very often. But most of us know plenty about traffic systems: drivers will know the frustration of trying to turn left onto a major road, of ceaseless, pointless lane-switching on a stalled rush-hour freeway, but also the joys of clear highways.
  • Deciding how to improve the traffic system, how to expand people’s opportunities, is obviously a question of resource allocation and prioritization on a scale that private individuals—even traders—cannot influence on their own. That’s why governments have not historically trusted the “magic of the markets” to produce better opportunities for transport. We intuitively understand that these decisions are made at the level of mass society and public policy. And, whether you like it or not, this is true for decisions about the economy as well.
  • As of birth, Jean is in the economy—even if s/he rarely goes to a market. You can’t not be an economic actor; you can’t not be part of the transport system.
  • Consider also the composition of the traffic system and the economy. A market, whatever else it is, is always essentially the same thing: a place where people can come together to buy and sell things. We could set up a market right now, with a few fences and a sign announcing that people could buy and sell. We don’t even really need the fences. A traffic system, however, is far more complex. To begin with, the system includes publicly and privately run elements: most cars are privately owned, as are most airlines
  • If we don’t evaluate traffic systems based on their size, or their growth, how do we evaluate them? Mostly, by how well they help people get where they want to go. The market metaphor encourages us to think that all economic activity is motivated by the search for profit, and pursued in the same fashion everywhere. In a market, everyone’s desires are perfectly interchangeable. But, while everybody engages in the transport system, we have no difficulty remembering that we all want to go to different places, in different ways, at different times, at different speeds, for different reasons
  • We know the traffic system because, whether we like it or not, we are always involved in it, from birth
  • Thinking of the economy in terms of the market—a featureless plane, with no entry or exit costs, little need for regulation, and equal opportunity for all—obscures this basic insight. And this underlying misconception creates a lot of problems: we’ve fetishized economic growth, we’ve come to distrust government regulation, and we imagine that the inequalities in our country, and our world, are natural or justified. If we imagine the economy otherwise—as a traffic system, for example—we see more clearly how the economy actually works.
  • We see that our economic life looks a lot less like going to “market” for fun and profit than it does sitting in traffic on our morning commute, hoping against hope that we’ll get where we want to go, and on time.
Javier E

The Coming Software Apocalypse - The Atlantic - 0 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (The first sketch after this list shows the same rules written as a transition table in code.)
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • The practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (The second sketch after this list illustrates the exhaustive-checking idea in ordinary Python.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
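A note on the model-based idea above: the elevator rules are easier to picture with a concrete toy. The Python sketch below is not from the article; the state and event names are assumptions chosen for illustration. It keeps the rules in a single transition table instead of scattering them through imperative code, which is roughly the shift in emphasis that model-based tools aim for.

```python
# Toy "model" of an elevator: the rules live in one transition table rather than
# in scattered if-statements. State and event names are illustrative assumptions.

TRANSITIONS = {
    # (current state, event)       -> next state
    ("door_open",   "close_door"): "door_closed",
    ("door_closed", "open_door"):  "door_open",
    ("door_closed", "move"):       "moving",
    ("moving",      "stop"):       "door_closed",
}

def step(state, event):
    """Apply one event; anything the model forbids (e.g. moving with the door open) is rejected."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event {event!r} is not allowed in state {state!r}")
    return TRANSITIONS[(state, event)]

if __name__ == "__main__":
    state = "door_open"
    for event in ["close_door", "move", "stop", "open_door"]:
        state = step(state, event)
        print(f"{event} -> {state}")
    # step("door_open", "move") would raise: the only way to get moving is to close the door.
```

Because the rules are plain data, a tool could render them as the boxes-and-lines diagram the excerpt describes, or generate production code from them; that, in caricature, is the model-based workflow.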
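The TLA+ passages turn on a further step: instead of testing a handful of runs, enumerate every reachable state and check an invariant in each one. The sketch below does that by brute force for a slightly richer toy elevator. It only illustrates the exhaustive-checking mindset in ordinary Python; it is not TLA+, whose specifications are mathematical and are checked by a purpose-built tool (TLC).

```python
# Brute-force exploration of every reachable state of a toy elevator, checking an
# invariant in each one. Illustrative only; real specification tools work on
# mathematical models, not hand-rolled Python like this.

from collections import deque

FLOORS = range(3)

def next_states(state):
    """All states reachable in one step; a state is (floor, door, moving)."""
    floor, door, moving = state
    if moving:
        # the elevator may stop at any floor, with the door still closed
        yield from ((f, "closed", False) for f in FLOORS)
    elif door == "open":
        yield (floor, "closed", False)      # close the door
    else:
        yield (floor, "open", False)        # open the door
        yield (floor, "closed", True)       # start moving

def invariant(state):
    floor, door, moving = state
    return not (moving and door == "open")  # never move with the door open

start = (0, "open", False)
seen, queue = {start}, deque([start])
while queue:
    state = queue.popleft()
    assert invariant(state), f"invariant violated in {state}"
    for nxt in next_states(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

print(f"checked {len(seen)} reachable states; the invariant holds in all of them")
```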
Javier E

The Trump Death Star Implodes - 0 views

  • My sense (as a younger boomer) of what drives young people’s concerns is a combination of anxiety about where we’re headed, and their beliefs regarding the absolute inadequacy of status quo institutions for navigating present and future perils.  They are done, exhausted, and completely devoid of patience and tolerance for any pleas to “let the system work”.
  • In addition, they seem more attuned to how profoundly unprepared (still, to this day) we all are for handling aggressive manipulation via social and traditional media. 
  • They seem to intuit that legacy standards of civil discourse have been corrupted and weaponized, and that some new standard is required.  Decades of bad faith arguments, predominantly from right wing media and the Republican party, whether about climate change, racism, health care, tax policy, worker protections, or social disparities, are a major factor in that disenchantment.  The traditional standards of evenhanded, open discourse have been turned against democracies around the world, at times with very negative consequences (Hungary, Poland, Chile, Venezuela, Brazil, etc.).
  • ...3 more annotations...
  • Impatience with stubborn faith in past American experience is hardly a vice in this context.  Democracy is not guaranteed, and is perhaps much more fragile than we’re comfortable admitting.  The Trump presidency is prima facie evidence these concerns are justified.
  • Perhaps there’s way more to this story than we know, and perhaps the perceived intolerance of these situations is really the early emergence of new ideas about the scope of freedom of speech, one that balances freedom of expression with social equity and the protection of democracy.
  • This is where the younger generation falls short, so far:  the rationale for their discomfort isn’t well articulated.  Consider the discord in the media the first step in a process of recognizing the problem, if only at an intuitive, emotional level, and the beginnings of (hopefully) a clearly delineated, more rational process of figuring out where the new boundaries of First Amendment rights and responsibilities lie in our lovely new age of weaponized social discourse.
Javier E

The Equality Conundrum | The New Yorker - 0 views

  • The philosopher Ronald Dworkin considered this type of parental conundrum in an essay called “What Is Equality?,” from 1981. The parents in such a family, he wrote, confront a trade-off between two worthy egalitarian goals. One goal, “equality of resources,” might be achieved by dividing the inheritance evenly, but it has the downside of failing to recognize important differences among the parties involved.
  • Another goal, “equality of welfare,” tries to take account of those differences by means of twisty calculations.
  • Take the first path, and you willfully ignore meaningful facts about your children. Take the second, and you risk dividing the inheritance both unevenly and incorrectly.
  • ...33 more annotations...
  • In 2014, the Pew Research Center asked Americans to rank the “greatest dangers in the world.” A plurality put inequality first, ahead of “religious and ethnic hatred,” nuclear weapons, and environmental degradation. And yet people don’t agree about what, exactly, “equality” means.
  • One side argues that the city should guarantee procedural equality: it should insure that all students and families are equally informed about and encouraged to study for the entrance exam. The other side argues for a more direct, representation-based form of equality: it would jettison the exam, adopting a new admissions system designed to produce student bodies reflective of the city’s demography
  • In the past year, for example, New York City residents have found themselves in a debate over the city’s élite public high schools
  • The complexities of egalitarianism are especially frustrating because inequalities are so easy to grasp. C.E.O.s, on average, make almost three hundred times what their employees make; billionaire donors shape our politics; automation favors owners over workers; urban economies grow while rural areas stagnate; the best health care goes to the richest.
  • It’s not just about money. Tocqueville, writing in 1835, noted that our “ordinary practices of life” were egalitarian, too: we behaved as if there weren’t many differences among us. Today, there are “premiere” lines for popcorn at the movies and five tiers of Uber;
  • Inequality is everywhere, and unignorable. We’ve diagnosed the disease. Why can’t we agree on a cure?
  • In a book based on those lectures, “One Another’s Equals: The Basis of Human Equality,” Waldron points out that people are also marked by differences of skill, experience, creativity, and virtue. Given such consequential differences, he asks, in what sense are people “equal”?
  • According to the Declaration of Independence, it is “self-evident” that all men are created equal. But, from a certain perspective, it’s our inequality that’s self-evident.
  • More than twenty per cent of Americans, according to a 2015 poll, agree: they believe that the statement “All men are created equal” is false.
  • In Waldron’s view, though, it’s not a binary choice; it’s possible to see people as equal and unequal simultaneously. A society can sort its members into various categories—lawful and criminal, brilliant and not—while also allowing some principle of basic equality to circumscribe its judgments and, in some contexts, override them
  • Egalitarians like Dworkin and Waldron call this principle “deep equality.” It’s because of deep equality that even those people who acquire additional, justified worth through their actions—heroes, senators, pop stars—can still be considered fundamentally no better than anyone else.
  • In the course of his search, he explores centuries of intellectual history. Many thinkers, from Cicero to Locke, have argued that our ability to reason is what makes us equals.
  • Other thinkers, including Immanuel Kant, have cited our moral sense.
  • Some philosophers, such as Jeremy Bentham, have suggested that it’s our capacity to suffer that equalizes us
  • Waldron finds none of these arguments totally persuasive.
  • In various religious traditions, he observes, equality flows not just from broad assurances that we are all made in God’s image but from some sense that everyone is the protagonist in a saga of error, realization, and redemption: we’re equal because God cares about how things turn out for each of us.
  • Waldron himself is taken by Hannah Arendt’s related concept of “natality,” the notion that what each of us shares is having been born as a “newcomer,” entering into history with “the capacity of beginning something anew, that is, of acting.”
  • equality may be not a self-evident fact about human beings but a human-made social construction that we must choose to put into practice.
  • In the end, Waldron concludes that there is no “small polished unitary soul-like substance” that makes us equal; there’s only a patchwork of arguments for our deep equality, collectively compelling but individually limited.
  • Equality is a composite idea—a nexus of complementary and competing intuitions.
  • The blurry nature of equality makes it hard to solve egalitarian dilemmas from first principles. In each situation, we must feel our way forward, reconciling our conflicting intuitions about what “equal” means.
  • The communities that have the easiest time doing that tend to have some clearly defined, shared purpose. Sprinters competing in a hundred-metre dash have varied endowments and train in different conditions; from a certain perspective, those differences make every race unfair.
  • By embracing an agreed-upon theory of equality before the race, the sprinters can find collective meaning in the ranked inequalities that emerge when it ends
  • Perhaps because necessity is so demanding, our egalitarian commitments tend to rest on a different principle: luck.
  • “Some people are blessed with good luck, some are cursed with bad luck, and it is the responsibility of society—all of us regarded collectively—to alter the distribution of goods and evils that arises from the jumble of lotteries that constitutes human life as we know it.” Anderson, in an influential coinage, calls this outlook “luck egalitarianism.”
  • This sort of artisanal egalitarianism is comparatively easy to arrange. Mass-producing it is what’s hard. A whole society can’t get together in a room to hash things out. Instead, consensus must coalesce slowly around broad egalitarian principles.
  • No principle is perfect; each contains hidden dangers that emerge with time. Many people, in contemplating the division of goods, invoke the principle of necessity: the idea that our first priority should be the equal fulfillment of fundamental needs. The hidden danger here becomes apparent once we go past a certain point of subsistence.
  • a core problem that bedevils egalitarianism—what philosophers call “the problem of expensive tastes.”
  • The problem—what feels like a necessity to one person seems like a luxury to another—is familiar to anyone who’s argued with a foodie spouse or roommate about the grocery bill.
  • The problem is so insistent that a whole body of political philosophy—“prioritarianism”—is devoted to the challenge of sorting people with needs from people with wants
  • the line shifts as the years pass. Medical procedures that seem optional today become necessities tomorrow; educational attainments that were once unusual, such as college degrees, become increasingly indispensable with time
  • Some thinkers try to tame the problem of expensive tastes by asking what a “normal” or “typical” person might find necessary. But it’s easy to define “typical” too narrowly, letting unfair assumptions influence our judgment
  • an odd feature of our social contract: if you’re fired from your job, unemployment benefits help keep you afloat, while if you stop working to have a child you must deal with the loss of income yourself. This contradiction, she writes, reveals an assumption that “the desire to procreate is just another expensive taste”; it reflects, she argues, the sexist presumption that “atomistic egoism and self-sufficiency” are the human norm. The word “necessity” suggests the idea of a bare minimum. In fact, it sets a high bar. Clearing it may require rethinking how society functions.
Javier E

It's not just vibes. Americans' perception of the economy has completely changed. - ABC... - 0 views

  • Applying the same pre-pandemic model to consumer sentiment during and after the pandemic, however, simply does not work. The indicators that correlated with people's feelings about the economy before 2020 no longer seem to matter in the same way
  • As with so many areas of American life, the pandemic has changed virtually everything about how people think about the economy and the issues that concern them
  • Prior to the pandemic, our model shows consumers felt better about the economy when the personal savings rate, a measure of how much money households are able to save rather than spend each month, was higher. This makes sense: People feel better when they have money in the bank and are able to save for important purchases like cars and houses.
  • ...20 more annotations...
  • Before the pandemic, a number of variables were statistically significant indicators for consumer sentiment in our model; in particular, the most salient variables appear to be vehicle sales, gas prices, median household income, the federal funds effective rate, personal savings and household expenditures (excluding food and energy).
  • During the pandemic, the personal savings rate soared. In April 2020, the metric was nearly double its previous high, recorded in May 1975.
  • All this taken together meant Americans were flush with cash but had nowhere to spend it. So despite the fact that the savings rate went way up, consumers still weren't feeling positively about the economy — contrary to the relationship between these two variables we saw in the decades before the pandemic.
  • Fast forward to 2024, and the personal savings rate has dropped to one of its lowest levels ever (the only time the savings rate was lower was in the years surrounding the Great Recession)
  • during and after the pandemic, Americans saw some of the highest rates of inflation the country has had in decades, and in a very short period of time. These sudden spikes naturally shocked many people who had been blissfully enjoying slow, steady price growth their entire adult lives. And it has taken a while for that shock to wear off, even as inflation has cooled.
  • the numbers align with our intuitive sense of how consumers process suddenly having their grocery store bill jump, as well as the findings from our model. In simple terms: Even if inflation is getting better, Americans aren't done being ticked off that it was bad to begin with.
  • surprisingly, our pre-pandemic model didn't find a notable relationship between housing prices and consumer sentiment
  • However, in our post-pandemic data, when we examined how correlated consumer sentiment was with each indicator we considered, consumer sentiment and median housing prices had the strongest correlation of all (a negative one, meaning higher prices were associated with lower consumer sentiment)
  • during the pandemic, low interest rates, high savings rates and changes in working patterns — namely, many workers' newfound ability to work from home — helped overheat the homebuying market, and buyers ran headlong into an enduring supply shortage. There simply weren't enough houses to buy, which drove up the costs of the ones that were for sale.
  • That's true even if a family has been able to save enough for a down payment, already a difficult task when rents remain high as well. Fewer people are able to cover their current housing costs while saving enough to make a down payment.
  • Low-income households are still the most likely to be burdened with high rents, but they're not the only ones affected anymore. High rents have also begun to affect those at middle-income levels as well.
  • In short, there was already a housing affordability crisis before the pandemic. Now it's worse, locking a wider array of people, at higher and higher income levels, out of the home-buying market
  • People who are renting but want to buy are stuck. People who live in starter homes and want to move to bigger homes are stuck. The conditions have frustrated a fundamental element of the American dream
  • In our pre-pandemic model, total vehicle sales had a strong positive relationship with consumer sentiment: If people were buying cars, you could pretty reasonably bet that they felt good about the economy. This feels intuitive — who buys a car if they think the economy is in bad shape?
  • Cox Automotive also tracks vehicle affordability by calculating the estimated number of weeks' worth of median income needed to purchase the average new vehicle, and while that number has improved over the last two years, it remains high compared to pre-pandemic levels. In April, the most recent month with data, it took 37.7 weeks of median income to purchase a car, compared with fewer than 35 weeks at the end of 2019.
  • "Right before the pandemic, the typical average transaction price was around $38,000 for a new car. By 2023, it was $48,000," Schirmer said. This could all be contributing to the break in the relationship between car sales and sentiment, he noted. Basically, people might be buying cars, but they aren't necessarily happy about it.
  • Inspired by our model of economic indicators and sentiment from 1987 to 2019, we tried to train a similar linear regression model on the same data from 2021 to 2024 to more directly compare how things changed after the pandemic. While we were able to get a pretty good fit for this post-pandemic model, something interesting happened: Not a single variable showed up as a statistically significant predictor of consumer sentiment. (A generic sketch of this kind of regression appears after this list.)
  • This suggests there's something much more complicated going on behind the scenes: Interactions between these variables are probably driving the prediction, and there's too much noise in this small post-pandemic data set for the model to disentangle it.
  • Changes in the kinds of purchases we've discussed — homes, cars and everyday items like groceries — have fundamentally shifted the way Americans view how affordable their lives are and how they measure their quality of life.
  • Even though some indicators may be improving, Americans are simply weighing the factors differently than they used to, and that gives folks more than enough reason to have the economic blues.
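The modeling described above (regressing consumer sentiment on a handful of macro indicators and asking which coefficients are statistically significant) looks roughly like the sketch below. It is a generic illustration, not the authors' actual code: the file name and column names are assumptions, and a real analysis would need the underlying sentiment and indicator series.

```python
# Generic sketch of the kind of model described above: ordinary least squares
# regressing consumer sentiment on macro indicators, then inspecting p-values.
# The CSV file and column names are placeholders, not the article's actual data.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("indicators_monthly.csv")           # hypothetical monthly data
df["date"] = pd.to_datetime(df["date"])

indicators = ["vehicle_sales", "gas_price", "median_income",
              "fed_funds_rate", "personal_savings_rate", "core_expenditures"]

pre = df[df["date"] < "2020-01-01"]                   # fit on pre-pandemic months only

X = sm.add_constant(pre[indicators])                  # add an intercept term
y = pre["consumer_sentiment"]

model = sm.OLS(y, X).fit()
print(model.summary())                                # coefficients, p-values, R-squared

# Refitting on the 2021-2024 rows is the post-pandemic experiment the article
# describes: a decent overall fit, but no single indicator significant on its own.
```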
Javier E

The Secrets of Princeton - NYTimes.com - 0 views

  • a truth that everyone who’s come up through Ivy League culture knows intuitively — that elite universities are about connecting more than learning, that the social world matters far more than the classroom to undergraduates, and that rather than an escalator elevating the best and brightest from every walk of life, the meritocracy as we know it mostly works to perpetuate the existing upper class.
  • Every elite seeks its own perpetuation, of course, but that project is uniquely difficult in a society that’s formally democratic and egalitarian and colorblind. And it’s even more difficult for an elite that prides itself on its progressive politics, its social conscience, its enlightened distance from hierarchies of blood and birth and breeding.
  • The intermarriage of elite collegians is only one of these mechanisms — but it’s an enormously important one.
  • ...7 more annotations...
  • Of course Ivy League schools double as dating services. Of course members of elites — yes, gender egalitarians, the males as well as the females — have strong incentives to marry one another, or at the very least find a spouse from within the wider meritocratic circle. What better way to double down on our pre-existing advantages?
  • That this “assortative mating,” in which the best-educated Americans increasingly marry one another, also ends up perpetuating existing inequalities seems blindingly obvious, which is no doubt why it’s considered embarrassing and reactionary to talk about it too overtly.
  • it would be like telling elite collegians that they should all move to similar cities and neighborhoods, surround themselves with their kinds of people and gradually price everybody else out of the places where social capital is built, influence exerted and great careers made. No need — that’s what we’re already doing!
  • Or it would be like telling admissions offices at elite schools that they should seek a form of student-body “diversity” that’s mostly cosmetic, designed to flatter multicultural sensibilities without threatening existing hierarchies all that much. They don’t need to be told — that’s how the system already works!
  • The result is an upper class that looks superficially like America, but mostly reproduces the previous generation’s elite.
  • But don’t come out and say it! Next people will start wondering why the names in the U.S. News rankings change so little from decade to decade. Or why the American population gets bigger and bigger, but our richest universities admit the same size classes every year. Or why in a country of 300 million people and countless universities, we can’t seem to elect a president or nominate a Supreme Court justice who doesn’t have a Harvard or Yale degree.
  • That the actual practice of meritocracy mostly involves a strenuous quest to avoid any kind of downward mobility, for oneself or for one’s kids, is something every upper-class American understands deep in his or her highly educated bones.
Javier E

Class Struggle in the Sky - NYTimes.com - 0 views

  • Statusization — to coin a useful term — is ubiquitous, no matter what your altitude. While you’re in your hospital bed spooning up red Jell-O, a patient in a private suite is enjoying strawberries and cream. On your way to a Chase A.T.M., you notice a silver plaque declaring the existence within of Private Client Services. This man has a box seat at a Yankees game; that man has a skybox. And the skybox isn’t the limit: high overhead, the 1 percent fly first class; the .1 percent fly Netjets; the .01 fly their own planes. Why should it be any different up above from down below?
  • In his new book, “The Great Degeneration,” the historian Niall Ferguson confirms my intuition. His argument is that we’ve seen a precipitous decline in social mobility over the last 30 years: “Once the United States was famed as a land of opportunity, where a family could leap from ‘rags to riches’ in a generation.
  • flying has become like driving — only instead of collapsing bridges and potholed roads, the hazards a traveler in economy faces are crippling back pain and plastic-wrapped ham sandwiches tossed on a tray by hassled flight attendants. It’s just another infrastructure in collapse.
Javier E

Confessions of a Columnist - The New York Times - 0 views

  • a year ago, I imagined that conservatism was sclerotic but ideologically committed, and that liberalism was wrong about the world but pretty good at fearmongering and voter targeting. But my intellect and experience were wrong, and Trump’s Napoleonic intuitions were correct: The Republicans were all low-energy men underneath, and the liberal elites were as vulnerable to him as the Cameron Tories and Blairites were to Brexit.
Javier E

History News Network | "We Find the Republican Party Busily Chewing on Itself" - 0 views

  • There is no precise historical comparison to this moment, but as Graham suggests, the closest may be 1954, when the Republican Party publicly struggled with how to deal with Joseph McCarthy.
  • In both cases, utterly unscrupulous men intuited the potential of the moment and seized an opportunity to capitalize on it. “Respectable” Republicans in the early 1950s looked down their noses at McCarthy, but as long as he was attacking Harry Truman and the Democrats, they were more than happy to have him do their dirty work. It was only when McCarthy’s witch hunt continued into the Eisenhower administration and he attacked the U.S. Army that he became intolerable. Then as now, for many Republicans, it was simply a matter of tone rather than substance, since their own policy prescriptions were not so substantively different from the demagogue’s.
  • What distinguishes Trump from McCarthy is that the latter was ultimately dependent on the party. When Republicans turned on him in 1954, he was finished as a political force. Trump has never been dependent on the party. Now he is taking it over.
  • ...2 more annotations...
  • Anything less than a forthright repudiation of Trump will mark every Republican officeholder with his dangerous bigotry, but it will do even more than that: it will brand the entire Republican Party with that bigotry.
  • those comments do represent Donald Trump and what he thinks. He has told us repeatedly who he is. It’s unmistakable. It’s time. Choose.
Javier E

Farhad and Mike Discuss the Apple Case and a Go-Playing Computer Program - The New York... - 0 views

  • The program is a blend of deep learning and Monte Carlo algorithms, meaning it is both good at recognizing patterns and has the ability to exhaustively search vast libraries of possible moves.
  • the timetable for computing dominance of Go has been moved up roughly a decade from when it had been expected. That’s largely because the new ability to blend pattern recognition algorithms and vast data sets has been yielding spectacular results in the last half-decade. It’s like computer scientists have found a powerful new hammer, and they’re using it to pound lots of different nails
  • The Google program combines two types of algorithms. One is a machine learning algorithm, which does an extremely good job of recognizing patterns based on being trained on a vast set of examples. So it is likely to have seen almost any move that a human could make, and also know which responses are better ones. (A toy sketch of this combination appears after this list.)
  • ...1 more annotation...
  • A second type of algorithm can also see the consequences of particular moves far, far in advance of the game by playing millions and millions or perhaps even billions of combinations of moves. In contrast, human Go experts have their experience to rely on, but it is fuzzy by comparison. Think of this as an intellectual version of John Henry and the jackhammer.
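The "blend" described in this item, a pattern-recognizing model that proposes promising moves plus Monte Carlo playouts that test them by sheer volume, can be caricatured in a few lines. The sketch below is only an illustration of that division of labor, not AlphaGo's architecture: `policy_net`, the game interface, and the random rollout are all stand-ins.

```python
# Caricature of the two-algorithm blend: a "policy" that scores candidate moves by
# pattern (a stand-in function here) plus Monte Carlo rollouts that play each
# surviving candidate out many times. Not AlphaGo; just the shape of the idea.

import random

def legal_moves(position):
    return position["moves"]                           # placeholder game interface

def play(position, move):
    return {"moves": position["moves"], "last": move}  # placeholder successor state

def policy_net(position, move):
    """Stand-in for a trained pattern-recognition network: higher = more promising."""
    return random.random()

def rollout_value(position, simulations=200):
    """Estimate a position's value by random playouts (the Monte Carlo half)."""
    return sum(random.random() for _ in range(simulations)) / simulations

def choose_move(position, top_k=3):
    # 1. Pattern recognition: keep only the k most promising candidate moves.
    candidates = sorted(legal_moves(position),
                        key=lambda m: policy_net(position, m),
                        reverse=True)[:top_k]
    # 2. Search: evaluate each surviving candidate by many playouts.
    return max(candidates, key=lambda m: rollout_value(play(position, m)))

if __name__ == "__main__":
    toy_position = {"moves": ["a1", "b2", "c3", "d4"], "last": None}
    print("chosen move:", choose_move(toy_position))
```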
Javier E

'Hamilton' and 'This Is For My Girls': Examples of How the Obamas Have Used Fame Wisely... - 0 views

  • Gopnik argued that Miranda had completed a long-brewing transition in historical interpretation among liberals from aligning themselves with Thomas Jefferson to aligning themselves with Hamilton:
  • ... a Hamiltonian liberal is an ex-revolutionary who believes that the small, detailed procedural efforts of the federal government to seed and promote prosperity are the ideal use of the executive role. Triumphs of this kind, as the show demonstrates, are so subtle and manifold as to often be largely invisible—and are most often as baffling and infuriating to those whom the change is designed to serve as they are to those whom the compromises are meant to placate.
  • Gopnik suggested that Miranda ended up offering this Obama-friendly message less by design than by intuition
  • ...2 more annotations...
  • If you look around at the major politically themed works of culture in the past seven years, you often find a similar focus on process, competence, and incremental change for the common good. Steven Spielberg’s Lincoln showed one of the most idealized presidents in bargaining mode; Zero Dark Thirty portrayed the defeat of Osama bin Laden as the result of scut work; even the cynical House of Cards has fun imagining a Democratic president murdering ideology in pursuit of concrete policy achievements.
  • If there’s anything that underlines the idea of hip-hop as a newly universal language, it’s Hamilton—both the musical itself and its conquest of the Great White Way and the White House. To say Obama is responsible for the wider shift in America that has enabled Hamilton’s success would be incorrect, of course. He is a beneficiary of that shift, and he has in turn, subtly, helped it progress further.
Javier E

James Q. Wilson Dies at 80 - Originated 'Broken Windows' Policing Strategy - NYTimes.com - 0 views

  • his most influential theory holds that when the police emphasize the maintenance of order rather than the piecemeal pursuit of rapists, murderers and carjackers, concentrating on less threatening though often illegal disturbances in the fabric of urban life like street-corner drug-dealing, graffiti and subway turnstile-jumping, the rate of more serious crime goes down.
  • The approach is psychologically based. It proceeds from the presumption, supported by research, that residents’ perceptions of the safety of their neighborhood are based not on whether there is a high rate of crime, but on whether the neighborhood appears to be well tended — that is, whether its residents hold it in mutual regard, uphold the locally accepted obligations of civility, and outwardly disdain the flouting of those obligations.
  • Acts of criminality are fostered by such an “untended” environment, and the solution is thus to tend it by being intolerant of the smallest illegalities. The wish “to ‘decriminalize’ disreputable behavior that ‘harms no one’ — and thus remove the ultimate sanction the police can employ to maintain neighborhood order — is, we think, a mistake,” Mr. Wilson and Mr. Kelling wrote. “Arresting a single drunk or a single vagrant who has harmed no identifiable person seems unjust, and in a sense it is. But failing to do anything about a score of drunks or a hundred vagrants may destroy an entire community.”
  • ...3 more annotations...
  • when a window is broken and someone fixes it, that is a sign that disorder will not be tolerated. But “one unrepaired broken window,” they wrote, “is a signal that no one cares, and so breaking more windows costs nothing.”
  • “The importance of what Wilson and Kelling wrote was the emphasis not only on crime committed against people but the emphasis on crimes committed against the community, neighborhoods,”
  • “I know my political ideas affect what I write,” he acknowledged in a 1998 interview in The Times, “but I’ve tried to follow the facts wherever they land. Every topic I’ve written about begins as a question. How do police departments behave? Why do bureaucracies function the way they do? What moral intuitions do people have? How do courts make their decisions? What do blacks want from the political system? I can honestly say I didn’t know the answers to those questions when I began looking into them.”
Javier E

It All Turns on Affection-Wendell E. Berry Lecture | National Endowment for the Humanities - 0 views

  • Wallace Stegner. He thought rightly that we Americans, by inclination at least, have been divided into two kinds: “boomers” and “stickers.” Boomers, he said, are “those who pillage and run,” who want “to make a killing and end up on Easy Street,” whereas stickers are “those who settle, and love the life they have made and the place they have made it in.”2 “Boomer” names a kind of person and a kind of ambition that is the major theme, so far, of the history of the European races in our country. “Sticker” names a kind of person and also a desire that is, so far, a minor theme of that history, but a theme persistent enough to remain significant and to offer, still, a significant hope.
  • We may, as we say, “know” statistical sums, but we cannot imagine them. It is by imagination that knowledge is “carried to the heart” (to borrow again from Allen Tate).5 The faculties of the mind—reason, memory, feeling, intuition, imagination, and the rest—are not distinct from one another. Though some may be favored over others and some ignored, none functions alone. But the human mind, even in its wholeness, even in instances of greatest genius, is irremediably limited. Its several faculties, when we try to use them separately or specialize them, are even more limited.
  • The fact is that we humans are not much to be trusted with what I am calling statistical knowledge, and the larger the statistical quantities the less we are to be trusted. We don’t learn much from big numbers. We don’t understand them very well, and we aren’t much affected by them. The reality that is responsibly manageable by human intelligence is much nearer in scale to a small rural community or urban neighborhood than to the “globe.”
  • ...3 more annotations...
  • Propriety of scale in all human undertakings is paramount, and we ignore it. We are now betting our lives on quantities that far exceed all our powers of comprehension. We believe that we have built a perhaps limitless power of comprehension into computers and other machines, but our minds remain as limited as ever. Our trust that machines can manipulate to humane effect quantities that are unintelligible and unimaginable to humans is incorrigibly strange.
  • We cannot know the whole truth, which belongs to God alone, but our task nevertheless is to seek to know what is true. And if we offend gravely enough against what we know to be true, as by failing badly enough to deal affectionately and responsibly with our land and our neighbors, truth will retaliate with ugliness, poverty, and disease. The crisis of this line of thought is the realization that we are at once limited and unendingly responsible for what we know and do.
  • It is a horrible fact that we can read in the daily paper, without interrupting our breakfast, numerical reckonings of death and destruction that ought to break our hearts or scare us out of our wits. This brings us to an entirely practical question:  Can we—and, if we can, how can we—make actual in our minds the sometimes urgent things we say we know? This obviously cannot be accomplished by a technological breakthrough, nor can it be accomplished by a big thought. Perhaps it cannot be accomplished at all.
Javier E

The Leadership Emotions - NYTimes.com - 0 views

  • Under their influence the distinction between campaigning and governing has faded away. Most important, certain faculties that were central to amateur decision making — experience, intuition, affection, moral sentiments, imagination and genuineness — have been shorn down for those traits that we associate with professional tactics and strategy — public opinion analysis, message control, media management and self-conscious positioning.
  • Edmund Burke once wrote, “The true lawgiver ought to have a heart full of sensibility. He ought to love and respect his kind, and to fear himself.”
  • Burke was emphasizing that leadership is a passionate activity. It begins with a warm gratitude toward that which you have inherited and a fervent wish to steward it well. It is propelled by an ardent moral imagination, a vision of a good society that can’t be realized in one lifetime. It is informed by seasoned affections, a love of the way certain people concretely are and a desire to give all a chance to live at their highest level.
  • ...1 more annotation...
  • This kind of leader is warm-blooded and leads with full humanity. In every White House, and in many private offices, there seems to be a tug of war between those who want to express this messy amateur humanism and those calculators who emphasize message discipline
Javier E

Thomas Piketty and His Critics - NYTimes.com - 0 views

  • both optimists and pessimists share a belief more telling than Piketty’s success: the idea that the traditional Democratic economic agenda is dead.
  • Piketty’s book reinforces the idea that the domestic policies liberals advocate for are palliative, not curative — that, in essence, inequality is here to stay.
  • “for countries at the world technological frontier” — the United States, northern Europe and parts of Asia — and “ultimately for the planet as a whole – there is ample reason to believe that the growth rate will not exceed 1-1.5 percent in the long run, no matter what economic policies are adopted.”
  • ...10 more annotations...
  • Piketty’s analysis articulates what many people on the Democratic left feel intuitively, that a domestic tax, spending and regulatory agenda is ineffective in the face of the power of globalized capital to grind down wages and benefits.
  • Rogoff views evidence of growing inequality presented by Piketty and others as “persuasive” and he proposes a number of alternative, smaller-scale remedies to control disproportionate wealth accumulation. He suggests a shift to a “relatively flat consumption tax, with a large deductible for progressivity.”
  • “absent aggressive policy intervention, the Western world appears to be headed toward a plutocratic dystopia characterized by wealth inequality approaching that of ancien régime France.”
  • Baker wrote that “a big part of the appeal is that it allows people to say capitalism is awful but there is nothing that we can do about it.”
  • Piketty’s proposed global tax would set rates of 0.1 to 0.5 percent on fortunes of less than 1 million euros ($1.37 million); 1 percent on assets of 1 to 5 million euros ($1.37 million to $6.87 million); 2 percent on holdings of 5 to 10 million euros ($6.87 million to $13.7 million); and a sliding scale ultimately reaching 10 percent on fortunes of “several hundred million or several billion euros.” (A small sketch applying this schedule appears after these annotations.)
  • Why, Rogoff asks, should we “try to move to an improbable global wealth tax when alternatives are available that are growth friendly, raise significant revenue, and can be made progressive through a very high exemption”?
  • Rogoff cites a series of suggestions developed by Jeffrey Frankel, a professor at the Kennedy School at Harvard. These include “the elimination of payroll taxes for low-income workers, a cut in deductions for high-income workers, and higher inheritance taxes.”
  • In other words, centrists like Rogoff and Crook – in addition to liberals determined to assault bastions of privilege — are beginning to take proposals to restrain the growing concentration of wealth seriously.
  • Both the shift of attention to wealth and the seriousness with which a proposal to constrain the accumulation of wealth is being taken represent a major change in the contemporary debate over inequality. Few Americans appear to begrudge the multimillion dollar annual compensation of entrepreneurial executives like Steve Jobs or Bill Gates. But inherited and unearned wealth does not command the same legitimacy.
  • In fact, the emergence of what Piketty calls “patrimonial capitalism” — concentrated wealth and political power passed on from generation to generation in a class-based social order — runs directly counter to the longstanding American commitment to equality of opportunity. Piketty has laid the intellectual groundwork for a challenge to a social and political order based on socioeconomic ranking by wealth stratification.
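The rate schedule quoted above lends itself to a quick worked example. The sketch below applies the quoted bands to a few sample fortunes. Two choices are assumptions made here for illustration, not part of the annotation: the rates are treated as marginal brackets (each rate applied only to the slice of wealth inside its band), and the quoted “0.1 to 0.5 percent” band is collapsed to its upper figure. The unspecified “sliding scale” above 10 million euros is left out rather than guessed at.

```python
# Worked example of the wealth-tax schedule quoted in the annotation above.
# Assumptions for illustration: rates are applied marginally (each rate only
# to the slice of wealth inside its band), and the quoted "0.1 to 0.5
# percent" band is taken at its upper figure. The sliding scale above 10M
# euros is not specified in the annotation, so the sketch refuses to guess.

BRACKETS_EUR = [             # (upper bound of band in euros, annual rate)
    (1_000_000, 0.005),      # under 1M: quoted as 0.1-0.5%; upper figure assumed
    (5_000_000, 0.01),       # 1M-5M: 1%
    (10_000_000, 0.02),      # 5M-10M: 2%
]


def wealth_tax(fortune_eur):
    """Annual tax owed, assuming marginal treatment of the quoted bands."""
    if fortune_eur > BRACKETS_EUR[-1][0]:
        raise ValueError("the schedule above 10M EUR is a sliding scale the annotation does not specify")
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS_EUR:
        if fortune_eur <= lower:
            break
        tax += (min(fortune_eur, upper) - lower) * rate  # tax only the slice in this band
        lower = upper
    return tax


if __name__ == "__main__":
    for fortune in (800_000, 3_000_000, 8_000_000):
        print(f"{fortune:>12,} EUR -> {wealth_tax(fortune):>10,.0f} EUR per year")
```

On these assumptions a 3 million euro fortune owes 25,000 euros a year (5,000 on the first million plus 20,000 on the next two), which gives a concrete sense of the scale of the proposal being debated.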