
History Readings: Group items tagged "trains"


Javier E

Does Sam Altman Know What He’s Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy code sketch of this predict-and-adjust loop appears after these annotations.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT-4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill. (A generic illustration of such a probe, using synthetic stand-in data, appears after these annotations.)
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add. (A toy version of this memorize-then-generalize setup appears after these annotations.)
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
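
The annotation earlier about prediction-driven learning ("these little adjustments coalesce into a geometric model of language") describes the training loop only in the abstract. As a purely illustrative, hypothetical sketch (not OpenAI's code), here is a minimal character-level next-token predictor in PyTorch; the tiny corpus, model size, and optimizer settings are arbitrary choices, and unlike GPT it conditions on only the single previous character.

```python
# Minimal sketch (not OpenAI's code): a character-level next-token predictor.
# It illustrates the loop described above: predict the next symbol, measure the
# error, and nudge the weights so that future predictions are a little better.
import torch
import torch.nn as nn

text = "the cat sat on the mat. the dog sat on the log. "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)  # each character becomes a vector
        self.head = nn.Linear(dim, vocab)      # vector -> scores for the next character

    def forward(self, idx):
        return self.head(self.embed(idx))

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

inputs, targets = data[:-1], data[1:]          # predict character i+1 from character i
for step in range(500):
    logits = model(inputs)
    loss = loss_fn(logits, targets)            # how wrong were the predictions?
    opt.zero_grad()
    loss.backward()                            # compute the small adjustments
    opt.step()                                 # apply them

# After training, the embedding matrix is a (very small) geometric model of the
# text: characters that behave similarly end up with nearby vectors.
print("final loss:", round(loss.item(), 3))
```
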
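The Othello annotation describes researchers "looking under the AI's hood"; the standard tool for that is a probe, a small classifier trained to read a hidden-layer activation and recover a fact about the world, such as whether a given board square is occupied. The sketch below only illustrates the general technique: the "activations" are synthetic stand-ins generated with numpy, and nothing here reproduces Kenneth Li's actual model, data, or code.

```python
# Hypothetical illustration of a probe (not Kenneth Li's code or data).
# A probe is a small classifier trained on a model's hidden activations to
# predict a fact about the world -- e.g., whether an Othello square is occupied.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, hidden_dim = 2000, 64

# Stand-in for activations that would normally be collected from the network
# mid-game. We fake a model that encodes "square occupied" along one direction
# of its activation space, buried in noise.
occupied = rng.integers(0, 2, size=n_examples)   # 0 = empty, 1 = occupied
concept_direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_examples, hidden_dim)) + np.outer(occupied, concept_direction)

X_train, X_test, y_train, y_test = train_test_split(
    activations, occupied, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on held-out activations:", probe.score(X_test, y_test))
# High accuracy means the concept is linearly readable from the hidden layer --
# in this toy sense, the model has "drawn the board" internally.
```
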
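The memorize-then-generalize behavior described in the Millière annotation can be watched directly in a toy setting. The sketch below is my own construction, not the experiment the article refers to: it assumes modular addition, a small fully connected network, and a random half-and-half train/test split. Tracking train versus held-out accuracy is what distinguishes memorization (train high, held-out low) from having actually learned how to add (both high).

```python
# Toy setup in the spirit of the annotation above (assumed details: modular
# addition, a small MLP, half the addition table held out); not the study's code.
import torch
import torch.nn as nn

P = 23  # small modulus, so the full table of (a, b) pairs is tiny
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
labels = (pairs[:, 0] + pairs[:, 1]) % P

# One-hot encode each operand and concatenate: the model sees symbols, not numbers.
X = torch.cat([nn.functional.one_hot(pairs[:, 0], P),
               nn.functional.one_hot(pairs[:, 1], P)], dim=1).float()

perm = torch.randperm(len(X))
train_idx, test_idx = perm[: len(X) // 2], perm[len(X) // 2:]

model = nn.Sequential(nn.Linear(2 * P, 128), nn.ReLU(), nn.Linear(128, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(5001):
    loss = loss_fn(model(X[train_idx]), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(X[train_idx]).argmax(1) == labels[train_idx]).float().mean().item()
            test_acc = (model(X[test_idx]).argmax(1) == labels[test_idx]).float().mean().item()
        # Memorization: train accuracy climbs while held-out accuracy stays low.
        # Generalization ("learning how to add"): held-out accuracy catches up.
        print(f"step {step}: train {train_acc:.2f}  held-out {test_acc:.2f}")
```
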
anonymous

The Secret 'White Trains' That Carried Nuclear Weapons Around the U.S. - HISTORY

  • Since the 1950s, this team of federal agents, most of them ex-military, has been tasked with ferrying America’s 6,800 nuclear warheads and extensive supply of nuclear materials across the roads and highways of the United States. America’s nuclear facilities are spread out throughout the country, on over 2.4 million acres of federal real estate, overseen by the Department of Energy (DOE)—a labyrinth of a system the Bulletin of the Atomic Scientists called “highly scattered and fragmented…with few enforceable rules.”
  • For as long as the United States has had nuclear weapons, it has struggled with the question of how to transport America’s most destructive technology throughout the country without incident. “It’s the weak link in the chain of nuclear security,” said Dr. Edwin Lyman of the Union of Concerned Scientists.
  • Today the United States relies almost entirely on million-dollar, Lockheed Martin tractor-trailers, known as Safeguard Transporters (SGTs) and Safe Secure Trailers (SSTs) to move nuclear material. But from the 1950s through the 1980s, the great hope for safe transit was so-called “white trains.”  
  • ...6 more annotations...
  • Amarillo was the final destination for almost all of America’s nuclear trains and the Pantex Plant was the nation’s only assembly point for nuclear weapons, a role it maintains to this day.
  • Each day trucks and trains rolled in, carrying plutonium from Georgia and Washington, bomb triggers from Colorado, uranium from Tennessee and neutron generators from Florida. They rolled out on white trains, carrying fully assembled nuclear weapons.
  • They then contacted peace and religious groups on the route, asking them to watch for the train, to organize a prayer vigil or a nonviolent protest when the train appeared, and to inform local newspapers about the train’s arrival.
  • But as the network of anti-nuclear activists grew, they became increasingly adept at tipping off the community if they saw an unmarked white train plow down their railways. The agency proposed new regulations that would make it illegal to pass information about the routing of the white train, but got little traction.
  • Though a group of protesters had effectively brought down the white trains, officials appeared confident that the nation’s rail network could provide an effective means of hiding weapons. By the late 1980s, the United States had 120,000 miles of available track, 20,000 locomotives, and 1.2 million railcars. At any given time, there were more than 1,700 trains on the tracks; military representatives insisted this would make it almost impossible for the Soviets to track where in the U.S. these 50 missile-laden trains had gone. “Rail-garrison will be the mainstay of our strategic defense well into the 21st century,” predicted one Texas Senator.
  • Whether waste or weapons, trains or trucks, the United States has been remarkably fortunate in avoiding major transportation mishaps. Since the days of the white trains, the government has insisted that nuclear materials are being moved across the American landscape in the safest possible way, persisting through crashes, fires, and interfering nuns. Yet public fears endure about whether moving such materials can ever truly be “safe.”
sarahbalick

Germany train crash: Several killed near Bavarian town of Bad Aibling - BBC News

  • Germany train crash: Several killed near Bavarian town of Bad Aibling
  • The trains' operator said both trains had partially derailed and were wedged into each other.
  • At least nine people were killed and scores more injured, police say, after two passenger trains collided in the German state of Bavaria.
  • ...4 more annotations...
  • The drivers of both trains and two train guards were among those killed, regional broadcaster Bayerischer Rundfunk said, quoting police.
  • He added: "The site is on a bend so we have to surmise that both train drivers had no visual contact before the crash and therefore crashed into each other largely without braking."
  • Bavarian Interior Minister Joachim Herrmann told the same conference it was "difficult to comprehend" how such a crash could happen given the amount of investment in railway safety following previous train accidents.
Javier E

'White Fragility' Is Everywhere. But Does Antiracism Training Work? - The New York Times

  • DiAngelo, who is 63 and white, with graying corkscrew curls framing delicate features, had won the admiration of Black activist intellectuals like Ibram X. Kendi, author of “How to Be an Antiracist,” who praises the “unapologetic critique” of her presentations, her apparent indifference to “the feelings of the white people in the room.”
  • “White Fragility” leapt onto the New York Times nonfiction best-seller list, and next came a stream of bookings for public lectures and, mostly, private workshops and speeches given to school faculties and government agencies and university administrations and companies like Microsoft and Google and W.L. Gore & Associates, the maker of Gore-Tex.
  • As outraged protesters rose up across the country, “White Fragility” became Amazon’s No. 1 selling book, beating out even the bankable escapism of the latest “Hunger Games” installment. The book’s small publisher, Beacon Press, had trouble printing fast enough to meet demand; 1.6 million copies, in one form or other, have been sold
  • ...52 more annotations...
  • I’d been talking with DiAngelo for a year when Floyd was killed, and with other antiracism teachers for almost as long. Demand has recently spiked throughout the field, though the clamor had already been building, particularly since the election of Donald Trump
  • As their teaching becomes more and more widespread, antiracism educators are shaping the language that gets spoken — and the lessons being learned — about race in America.
  • “I will not coddle your comfort,” she went on. She gestured crisply with her hands. “I’m going to name and admit to things white people rarely name and admit.” Scattered Black listeners called out encouragement. Then she specified the predominant demographic in the packed house: white progressives. “I know you. Oh, white progressives are my specialty. Because I am a white progressive.” She paced tightly on the stage. “And I have a racist worldview.”
  • “White supremacy — yes, it includes extremists or neo-Nazis, but it is also a highly descriptive sociological term for the society we live in, a society in which white people are elevated as the ideal for humanity, and everyone else is a deficient version.” And Black people, she said, are cast as the most deficient. “There is something profoundly anti-Black in this culture.”
  • White fragility, in DiAngelo’s formulation, is far from weakness. It is “weaponized.” Its evasions are actually a liberal white arsenal, a means of protecting a frail moral ego, defending a righteous self-image and, ultimately, perpetuating racial hierarchies, because what goes unexamined will never be upended
  • At some point after our answers, DiAngelo poked fun at the myriad ways that white people “credential” themselves as not-racist. I winced. I hadn’t meant to imply that I was anywhere close to free of racism, yet was I “credentialing”?
  • the pattern she first termed “white fragility” in an academic article in 2011: the propensity of white people to fend off suggestions of racism, whether by absurd denials (“I don’t see color”) or by overly emotional displays of defensiveness or solidarity (DiAngelo’s book has a chapter titled “White Women’s Tears” and subtitled “But you are my sister, and I share your pain!”) or by varieties of the personal history I’d provided.
  • But was I being fragile? Was I being defensive or just trying to share something more personal, intimate and complex than DiAngelo’s all-encompassing sociological perspective? She taught, throughout the afternoon, that the impulse to individualize is in itself a white trait, a way to play down the societal racism all white people have thoroughly absorbed.
  • One “unnamed logic of Whiteness,” she wrote with her frequent co-author, the education professor Ozlem Sensoy, in a 2017 paper published in The Harvard Educational Review, “is the presumed neutrality of White European Enlightenment epistemology.”
  • she returned to white supremacy and how she had been imbued with it since birth. “When my mother was pregnant with me, who delivered me in the hospital — who owned the hospital? And who came in that night and mopped the floor?” She paused so we could picture the complexions of those people. Systemic racism, she announced, is “embedded in our cultural definitions of what is normal, what is correct, what is professionalism, what is intelligence, what is beautiful, what is valuable.”
  • “I have come to see white privilege as an invisible package of unearned assets that I can count on cashing in each day, but about which I was ‘meant’ to remain oblivious,” one of the discipline’s influential thinkers, Peggy McIntosh, a researcher at the Wellesley Centers for Women, has written. “White privilege is like an invisible weightless knapsack of special provisions, assurances, tools, maps, guides, codebooks, passports, visas, clothes, compass, emergency gear and blank checks.”
  • Borrowing from feminist scholarship and critical race theory, whiteness studies challenges the very nature of knowledge, asking whether what we define as scientific research and scholarly rigor, and what we venerate as objectivity, can be ways of excluding alternate perspectives and preserving white dominance
  • the Seattle Gilbert & Sullivan Society’s casting of white actors as Asians in a production of “The Mikado.” “That changed my life,” she said. The phrase “white fragility” went viral, and requests to speak started to soar; she expanded the article into a book and during the year preceding Covid-19 gave eight to 10 presentations a month, sometimes pro bono but mostly at up to $15,000 per event.
  • For almost everyone, she assumes, there is a mingling of motives, a wish for easy affirmation (“they can say they heard Robin DiAngelo speak”) and a measure of moral hunger.
  • Moore drew all eyes back to him and pronounced, “The cause of racial disparities is racism. If I show you data that’s about race, we need to be talking about racism. Don’t get caught up in detours.” He wasn’t referring to racism’s legacy. He meant that current systemic racism is the explanation for devastating differences in learning, that the prevailing white culture will not permit Black kids to succeed in school.
  • The theme of what white culture does not allow, of white society’s not only supreme but also almost-absolute power, is common to today’s antiracism teaching and runs throughout Singleton’s and DiAngelo’s programs
  • Running slightly beneath or openly on the surface of DiAngelo’s and Singleton’s teaching is a set of related ideas about the essence and elements of white culture
  • For DiAngelo, the elements include the “ideology of individualism,” which insists that meritocracy is mostly real, that hard work and talent will be justly rewarded. White culture, for her, is all about habits of oppressive thought that are taken for granted and rarely perceived, let alone questioned
  • if we were white and happened to be sitting beside someone of color, we were forbidden to ask the person of color to speak first. It might be good policy, mostly, for white people to do more listening than talking, but, she said with knowing humor, it could also be a subtle way to avoid blunders, maintain a mask of sensitivity and stay comfortable. She wanted the white audience members to feel as uncomfortable as possible.
  • The modern university, it says, “with its ‘experts’ and its privileging of particular forms of knowledge over others (e.g., written over oral, history over memory, rationalism over wisdom)” has “validated and elevated positivistic, White Eurocentric knowledge over non-White, Indigenous and non-European knowledges.”
  • the idea of a society rigged at its intellectual core underpins her lessons.
  • There is the myth of meritocracy. And valuing “written communication over other forms,” he told me, is “a hallmark of whiteness,” which leads to the denigration of Black children in school. Another “hallmark” is “scientific, linear thinking. Cause and effect.” He said, “There’s this whole group of people who are named the scientists. That’s where you get into this whole idea that if it’s not codified in scientific thought that it can’t be valid.”
  • “This is a good way of dismissing people. And this,” he continued, shifting forward thousands of years, “is one of the challenges in the diversity-equity-inclusion space; folks keep asking for data. How do you quantify, in a way that is scientific — numbers and that kind of thing — what people feel when they’re feeling marginalized?”
  • Moore directed us to a page in our training booklets: a list of white values. Along with “ ‘The King’s English’ rules,” “objective, rational, linear thinking” and “quantitative emphasis,” there was “work before play,” “plan for future” and “adherence to rigid time schedules.”
  • Moore expounded that white culture is obsessed with “mechanical time” — clock time — and punishes students for lateness. This, he said, is but one example of how whiteness undercuts Black kids. “The problems come when we say this way of being is the way to be.” In school and on into the working world, he lectured, tremendous harm is done by the pervasive rule that Black children and adults must “bend to whiteness, in substance, style and format.”
  • Dobbin’s research shows that the numbers of women or people of color in management do not increase with most anti-bias education. “There just isn’t much evidence that you can do anything to change either explicit or implicit bias in a half-day session,” Dobbin warns. “Stereotypes are too ingrained.”
  • he noted that new research that he’s revising for publication suggests that anti-bias training can backfire, with adverse effects especially on Black people, perhaps, he speculated, because training, whether consciously or subconsciously, “activates stereotypes.”
  • When we spoke again in June, he emphasized an additional finding from his data: the likelihood of backlash “if people feel that they’re being forced to go to diversity training to conform with social norms or laws.”
  • Donald Green, a professor of political science at Columbia, and Betsy Levy Paluck, a professor of psychology and public affairs at Princeton, have analyzed almost 1,000 studies of programs to lessen prejudice, from racism to homophobia, in situations from workplaces to laboratory settings. “We currently do not know whether a wide range of programs and policies tend to work on average,
  • She replied that if a criterion “consistently and measurably leads to certain people” being excluded, then we have to “challenge” the criterion. “It’s the outcome,” she emphasized; the result indicated the racism.
  • Another critique has been aimed at DiAngelo, as her book sales have skyrocketed. From both sides of the political divide, she has been accused of peddling racial reductionism by branding all white people as supremacist
  • Chislett filed suit in October against Carranza and the department. At least five other high-level, white D.O.E. executives have filed similar suits or won settlements from the city over the past 14 months. The trainings lie at the heart of their claims.
  • Chislett eventually wound up demoted from the leadership of A.P. for All, and her suit argues that the trainings created a workplace filled with antiwhite distrust and discrimination
  • whatever the merits of Chislett’s lawsuit and the counteraccusations against her, she is also concerned about something larger. “It’s absurd,” she said about much of the training she’s been through. “The city has tens of millions invested in A.P. for All, so my team can give kids access to A.P. classes and help them prepare for A.P. exams that will help them get college degrees, and we’re all supposed to think that writing and data are white values? How do all these people not see how inconsistent this is?”
  • I talked with DiAngelo, Singleton, Amante-Jackson and Kendi about the possible problem. If the aim is to dismantle white supremacy, to redistribute power and influence, I asked them in various forms, do the messages of today’s antiracism training risk undermining the goal by depicting an overwhelmingly rigged society in which white people control nearly all the outcomes, by inculcating the idea that the traditional skills needed to succeed in school and in the upper levels of the workplace are somehow inherently white, by spreading the notion that teachers shouldn’t expect traditional skills as much from their Black students, by unwittingly teaching white people that Black people require allowances, warrant extraordinary empathy and can’t really shape their own destinies?
  • With DiAngelo, my worries led us to discuss her Harvard Educational Review paper, which cited “rationalism” as a white criterion for hiring, a white qualification that should be reconsidered
  • Shouldn’t we be hiring faculty, I asked her, who fully possess, prize and can impart strong reasoning skills to students, because students will need these abilities as a requirement for high-paying, high-status jobs?
  • I pulled us away from the metaphorical, giving the example of corporate law as a lucrative profession in which being hired depends on acute reasoning.
  • They’ve just refined their analysis, with the help of two Princeton researchers, Chelsey Clark and Roni Porat. “As the study quality goes up,” Paluck told me, “the effect size dwindles.”
  • she said abruptly, “Capitalism is so bound up with racism. I avoid critiquing capitalism — I don’t need to give people reasons to dismiss me. But capitalism is dependent on inequality, on an underclass. If the model is profit over everything else, you’re not going to look at your policies to see what is most racially equitable.”
  • I was asking about whether her thinking is conducive to helping Black people displace white people on high rungs and achieve something much closer to equality in our badly flawed world
  • it seemed that she, even as she gave workshops on the brutal hierarchies of here and now, was entertaining an alternate and even revolutionary reality. She talked about top law firms hiring for “resiliency and compassion.”
  • Singleton spoke along similar lines. I asked whether guiding administrators and teachers to put less value, in the classroom, on capacities like written communication and linear thinking might result in leaving Black kids less ready for college and competition in the labor market. “If you hold that white people are always going to be in charge of everything,” he said, “then that makes sense.”
  • He invoked, instead, a journey toward “a new world, a world, first and foremost, where we have elevated the consciousness, where we pay attention to the human being.” The new world, he continued, would be a place where we aren’t “armed to distrust, to be isolated, to hate,” a place where we “actually love.”
  • I reread “How to Be an Antiracist.” “Capitalism is essentially racist; racism is essentially capitalist,” he writes. “They were birthed together from the same unnatural causes, and they shall one day die together from unnatural causes.”
  • “I think Americans need to decide whether this is a multicultural nation or not,” he said. “If Americans decide that it is, what that means is we’re going to have multiple cultural standards and multiple perspectives. It creates a scenario in which we would have to have multiple understandings of what achievement is and what qualifications are. That is part of the problem. We haven’t decided, as a country, even among progressives and liberals, whether we desire a multicultural nation or a unicultural nation.”
  • Ron Ferguson, a Black economist, faculty member at Harvard’s John F. Kennedy School of Government and director of Harvard’s Achievement Gap Initiative, is a political liberal who gets impatient with such thinking about conventional standards and qualifications
  • “The cost,” he told me in January, “is underemphasizing excellence and performance and the need to develop competitive prowess.” With a soft, rueful laugh, he said I wouldn’t find many economists sincerely taking part in the kind of workshops I was writing about
  • “When the same group of people keeps winning over and over again,” he added, summarizing the logic of the trainers, “it’s like the game must be rigged.” He didn’t reject a degree of rigging, but said, “I tend to go more quickly to the question of how can we get prepared better to just play the game.”
  • But, he suggested, “in this moment we’re at risk of giving short shrift to dealing with qualifications. You can try to be competitive by equipping yourself to run the race that’s already scheduled, or you can try to change the race. There may be some things about the race I’d like to change, but my priority is to get people prepared to run the race that’s already scheduled.”
  • DiAngelo hopes that her consciousness raising is at least having a ripple effect, contributing to a societal shift in norms. “You’re watching network TV, and they’re saying ‘systemic racism’ — that it’s in the lexicon is kind of incredible,” she said. So was the fact that “young people understand and use language like ‘white supremacy.’”
  • We need a culture where a person who resists speaking up against racism is uncomfortable, and right this moment it looks like we’re in that culture.”
Javier E

Speedy Trains Transform China - NYTimes.com - 0 views

  • With traffic growing 28 percent a year for the last several years, China’s high-speed rail network will handle more passengers by early next year than the 54 million people a month who board domestic flights in the United States.
  • The trains hurtle along at 186 miles an hour and are smooth, well-lighted, comfortable and almost invariably punctual, if not early.
  • China’s high-speed rail system has emerged as an unexpected success story. Economists and transportation experts cite it as one reason for China’s continued economic growth when other emerging economies are faltering.
  • ...9 more annotations...
  • it has not been without costs — high debt, many people relocated and a deadly accident. The corruption trials this summer of two former senior rail ministry officials have cast an unfavorable light on the bidding process for the rail lines.
  • Chinese workers are now more productive. A paper for the World Bank by three consultants this year found that Chinese cities connected to the high-speed rail network, as more than 100 are already, are likely to experience broad growth in worker productivity. The productivity gains occur when companies find themselves within a couple of hours’ train ride of tens of millions of potential customers, employees and rivals.
  • New subway lines, rail lines and urban districts are part of China’s heavy dependence on investment-led growth.
  • Companies are opening research and development centers in more glamorous cities like Beijing and Shenzhen with abundant supplies of young, highly educated workers, and having them take frequent day trips to factories in cities with lower wages and land costs, like Tianjin and Changsha.
  • “More frequent access to my client base has allowed me to more quickly pick up on fashion changes in color and style. My orders have increased by 50 percent,”
  • China’s high-speed rail program has been married to the world’s most ambitious subway construction program, as more than half the world’s large tunneling machines chisel away underneath big Chinese cities. That has meant easy access to high-speed rail stations for huge numbers of people
  • Businesses are also customizing their products more through frequent meetings with clients in other cities, part of a broader move up the ladder toward higher value-added products.
  • Another impact: air travel. Train ridership has soared partly because China has set fares on high-speed rail lines at a little less than half of comparable airfares and then refrained from raising them. On routes that are four or five years old, prices have stayed the same as blue-collar wages have more than doubled. That has resulted in many workers, as well as business executives, switching to high-speed trains.
  • Airlines have largely halted service on routes of less than 300 miles when high-speed rail links open. They have reduced service on routes of 300 to 470 miles.
zarinastone

Whale Tail Sculpture Catches Dutch Train 30 Feet Above Ground : NPR - 0 views

  • A Dutch train burst past the end of its elevated tracks Monday in the Netherlands.
  • But instead of crashing to the ground 30 feet below, the metro train was caught — held aloft by an artist's massive sculpture of a whale's tail. Despite some damage, no injuries or deaths were reported.
  • It's unclear why the train didn't stop.
  • ...2 more annotations...
  • The architect who created the sculpture, Maarten Struijs, was shocked it held up
  • "I am amazed that it is so strong," Struijs said, according to The Guardian. "When plastic has stood for 20 years, you don't expect it to hold up a metro train."
sgardner35

Amtrak Train Derails in Philadelphia, Killing at Least 6 and Injuring Dozens - NYTimes.com - 0 views

  • bound Amtrak train that derailed and overturned late Tuesday, killing six people, injuring dozens more, and disrupting train service for thousands of riders in the Northeast region.
  • The train had at least seven cars, including the engine, which separated from the rest, officials said. Six cars overturned. At least one looked as bent as a crumpled soda can, and parts of the damaged cars were so badly mangled that firefighters had to use hydraulic tools to rescue people trapped inside.
  • On Wednesday, Temple University Hospital said it had received 54 people from the wreck. Herbert E. Cushing, the chief medical officer, said one person died overnight from a massive chest injury, and 25 remained in the hospital, including eight people in critical condition.
  • ...4 more annotations...
  • Early on Wednesday, Mr. Nutter said officials had still not accounted for everyone on board.
  • Still, the derailment on Tuesday took place in roughly the same area of track that was the site of one of the nation’s deadliest rail accidents.
  • Amtrak canceled service between New York and Philadelphia, and modified three other routes. Officials said New Jersey Transit would honor Amtrak tickets between New York City and Trenton.
  • Into the early morning, train cancellations piled up, not just from Amtrak but also from New Jersey Transit and other services that use the same section of track that is now mangled.
Javier E

Inside a Battle Over Race, Class and Power at Smith College - The New York Times - 0 views

  • NORTHAMPTON, Mass. — In midsummer of 2018, Oumou Kanoute, a Black student at Smith College, recounted a distressing American tale: She was eating lunch in a dorm lounge when a janitor and a campus police officer walked over and asked her what she was doing there.
  • The officer, who could have been carrying a “lethal weapon,” left her near “meltdown,” Ms. Kanoute wrote on Facebook, saying that this encounter continued a yearlong pattern of harassment at Smith.
  • “All I did was be Black,” Ms. Kanoute wrote. “It’s outrageous that some people question my being at Smith College, and my existence overall as a woman of color.”
  • ...42 more annotations...
  • The college’s president, Kathleen McCartney, offered profuse apologies and put the janitor on paid leave. “This painful incident reminds us of the ongoing legacy of racism and bias,” the president wrote, “in which people of color are targeted while simply going about the business of their ordinary lives.”
  • a law firm hired by Smith College to investigate the episode found no persuasive evidence of bias. Ms. Kanoute was determined to have eaten in a deserted dorm that had been closed for the summer; the janitor had been encouraged to notify security if he saw unauthorized people there. The officer, like all campus police, was unarmed.
  • Smith College officials emphasized “reconciliation and healing” after the incident. In the months to come they announced a raft of anti-bias training for all staff, a revamped and more sensitive campus police force and the creation of dormitories — as demanded by Ms. Kanoute and her A.C.L.U. lawyer — set aside for Black students and other students of color.
  • But they did not offer any public apology or amends to the workers whose lives were gravely disrupted by the student’s accusation.
  • The atmosphere at Smith is gaining attention nationally, in part because a recently resigned employee of the school, Jodi Shaw, has attracted a fervent YouTube following by decrying what she sees as the college’s insistence that its white employees, through anti-bias training, accept the theory of structural racism.
  • The story highlights the tensions between a student’s deeply felt sense of personal truth and facts that are at odds with it.
  • Those tensions come at a time when few in the Smith community feel comfortable publicly questioning liberal orthodoxy on race and identity, and some professors worry the administration is too deferential to its increasingly emboldened students.
  • “My perception is that if you’re on the wrong side of issues of identity politics, you’re not just mistaken, you’re evil,” said James Miller, an economics professor at Smith College and a conservative.
  • Faculty members, however, pointed to a pattern that they say reflects the college’s growing timidity in the face of allegations from students, especially around the issue of race and ethnicity.
  • In 2016, students denounced faculty at Smith’s social work program as racist after some professors questioned whether admissions standards for the program had been lowered and this was affecting the quality of the field work. Dennis Miehls, one of the professors they decried, left the school not long after.
  • This is a tale of how race, class and power collided at the elite 145-year-old liberal arts college, where tuition, room and board top $78,000 a year and where the employees who keep the school running often come from working-class enclaves beyond the school’s elegant wrought iron gates
  • “Stop demanding that I admit to white privilege, and work on my so-called implicit bias as a condition of my continued employment,”
  • Student workers were not supposed to use the Tyler cafeteria, which was reserved for a summer camp program for young children. Jackie Blair, a veteran cafeteria employee, mentioned that to Ms. Kanoute when she saw her getting lunch there and then decided to drop it. Staff members dance carefully around rule enforcement for fear students will lodge complaints.
  • “We used to joke, don’t let a rich student report you, because if you do, you’re gone,” said Mark Patenaude, a janitor.
  • A well-known older campus security officer drove over to the dorm. He recognized Ms. Kanoute as a student and they had a brief and polite conversation, which she recorded. He apologized for bothering her and she spoke to him of her discomfort: “Stuff like this happens way too often, where people just feel, like, threatened.”
  • That night Ms. Kanoute wrote a Facebook post: “It’s outrageous that some people question my being at Smith, and my existence overall as a woman of color.”
  • Her two-paragraph post hit Smith College like an electric charge. President McCartney weighed in a day later. “I begin by offering the student involved my deepest apology that this incident occurred,” she wrote. “And to assure her that she belongs in all Smith places.”
  • Ms. McCartney did not speak to the accused employees and put the janitor on paid leave that day.
  • Ms. McCartney appeared intent on making no such missteps in 2018. In an interview, she said that Ms. Kanoute deserved an apology and swift action, even before the investigation was undertaken. “It was appropriate to apologize,” Ms. McCartney said. “She is living in a context of ‘living while Black’ incidents.” The school’s workers felt scapegoated.
  • “It is safe to say race is discussed far more often than class at Smith,” said Prof. Marc Lendler, who teaches American government at the college. “It’s a feature of elite academic institutions that faculty and students don’t recognize what it means to be elite.”
  • The repercussions spread. Three weeks after the incident at Tyler House, Ms. Blair, the cafeteria worker, received an email from a reporter at The Boston Globe asking her to comment on why she called security on Ms. Kanoute for “eating while Black.” That puzzled her; what did she have to do with this?
  • The food services director called the next morning. “Jackie,” he said, “you’re on Facebook.” She found that Ms. Kanoute had posted her photograph, name and email, along with that of Mr. Patenaude, a 21-year Smith employee and janitor.
  • “This is the racist person,” Ms. Kanoute wrote of Ms. Blair, adding that Mr. Patenaude too was guilty. (He in fact worked an early shift that day and had already gone home at the time of the incident.) Ms. Kanoute also lashed the Smith administration. “They’re essentially enabling racist, cowardly acts.”
  • Ms. Blair was born and raised and lives in Northampton with her husband, a mechanic, and makes about $40,000 a year. Within days of being accused by Ms. Kanoute, she said, she found notes in her mailbox and taped to her car window. “RACIST” read one. People called her at home. “You should be ashamed of yourself,” a caller said. “You don’t deserve to live,” said another.
  • Smith College put out a short statement noting that Ms. Blair had not placed the phone call to security but did not absolve her of broader responsibility. Ms. McCartney called her and briefly apologized. That apology was not made public.
  • By September, a chill had settled on the campus. Students walked out of autumn convocation in solidarity with Ms. Kanoute. The Black Student Association wrote to the president saying they “do not feel heard or understood. We feel betrayed and tokenized.”
  • Smith officials pressured Ms. Blair to go into mediation with Ms. Kanoute. “A core tenet of restorative justice,” Ms. McCartney wrote, “is to provide people with the opportunity for willing apology, forgiveness and reconciliation.”
  • Ms. Blair declined. “Why would I do this? This student called me a racist and I did nothing,” she said.
  • On Oct. 28, 2018, Ms. McCartney released a 35-page report from a law firm with a specialty in discrimination investigations. The report cleared Ms. Blair altogether and found no sufficient evidence of discrimination by anyone else involved, including the janitor who called campus police.
  • Still, Ms. McCartney said the report validated Ms. Kanoute’s lived experience, notably the fear she felt at the sight of the police officer. “I suspect many of you will conclude, as did I,” she wrote, “it is impossible to rule out the potential role of implicit racial bias.”
  • Ms. McCartney offered no public apology to the employees after the report was released. “We were gobsmacked — four people’s lives wrecked, two were employees of more than 35 years and no apology,” said Tracey Putnam Culver, a Smith graduate who recently retired from the college’s facilities management department. “How do you rationalize that?”
  • Rahsaan Hall, racial justice director for the A.C.L.U. of Massachusetts and Ms. Kanoute’s lawyer, cautioned against drawing too much from the investigative report, as subconscious bias is difficult to prove. Nor was he particularly sympathetic to the accused workers.
  • “It’s troubling that people are more offended by being called racist than by the actual racism in our society,” he said. “Allegations of being racist, even getting direct mailers in their mailbox, is not on par with the consequences of actual racism.”
  • Ms. Blair was reassigned to a different dormitory, as Ms. Kanoute lived in the one where she had labored for many years. Her first week in her new job, she said, a female student whispered to another: There goes the racist.
  • Anti-bias training began in earnest in the fall. Ms. Blair and other cafeteria and grounds workers found themselves being asked by consultants hired by Smith about their childhood and family assumptions about race, which many viewed as psychologically intrusive. Ms. Blair recalled growing silent and wanting to crawl inside herself.
  • The faculty are not required to undergo such training. Professor Lendler said in an interview that such training for working-class employees risks becoming a kind of psychological bullying. “My response would be, ‘Unless it relates to conditions of employment, it’s none of your business what I was like growing up or what I should be thinking of,’” he said.
  • In addition to the training sessions, the college has set up “White Accountability” groups where faculty and staff are encouraged to meet on Zoom and explore their biases, although faculty attendance has fallen off considerably.
  • The janitor who called campus security quietly returned to work after three months of paid leave and declined to be interviewed. The other janitor, Mr. Patenaude, who was not working at the time of the incident, left his job at Smith not long after Ms. Kanoute posted his photograph on social media, accusing him of “racist cowardly acts.”
  • “I was accused of being the racist,” Mr. Patenaude said. “To be honest, that just knocked me out. I’m a 58-year-old male, we’re supposed to be tough. But I suffered anxiety because of things in my past and this brought it to a whole ’nother level.”
  • He recalled going through one training session after another in race and intersectionality at Smith. He said it left workers cynical. “I don’t know if I believe in white privilege,” he said. “I believe in money privilege.”
  • This past autumn the university furloughed her and other workers, citing the coronavirus and the empty dorms. Ms. Blair applied for an hourly job with a local restaurant. The manager set up a Zoom interview, she said, and asked her: “‘Aren’t you the one involved in that incident?’”
  • “I was pissed,” she said. “I told her I didn’t do anything wrong, nothing. And she said, ‘Well, we’re all set.’”
peterconnelly

Maglev train: China debuts prototype that can hit speeds of 620 kilometers per hour | C... - 0 views

  • (CNN) — China has revealed a prototype for a new high-speed Maglev train that is capable of reaching speeds of 620 kilometers (385 miles) per hour.
  • The train runs on high-temperature superconducting (HTS) power that makes it look as if the train is floating along the magnetized tracks.
  • The sleek 21-meter-long (69 feet) prototype was unveiled to media in the city of Chengdu, Sichuan Province, on January 13. In addition, university researchers constructed 165 meters (541 feet) of track to demonstrate how the train would look and feel in transit, according to state-run Xinhua News.
  • ...2 more annotations...
  • This time last year, China unveiled a new 174-kilometer high-speed railway line connecting Beijing with 2022 Winter Olympics host city Zhangjiakou, cutting the travel time between the two from three hours to 47 minutes.
  • It will run on routes between Beijing, Shenyang and Harbin -- the latter of which is so cold that it hosts an annual snow and ice festival.
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Javier E

Europe's flight-shame movement has travelers taking trains to save the planet - The Was... - 0 views

  • Budget airlines such as Ireland’s Ryanair and British easyJet revolutionized European travel two decades ago, when they first started offering to scoot people across the continent for as little as $20 a flight. That mode of travel, once celebrated as an opening of the world, is now being recognized for its contribution to global problems.
  • Tourists have been spooked by the realization that one passenger’s share of the exhaust from a single flight can cancel out a year’s worth of Earth-friendly efforts
  • “Now, when people tell me why they are taking the train, they say two things in the same breath: They say they are fed up with the stress of flying, and they want to cut their carbon footprint,
  • ...18 more annotations...
  • So far, the biggest shift has been in green-conscious Sweden, where airline executives blame increased train travel — up one-third this summer compared with a year ago — for a drop in air passenger traffic.
  • The newly coined concept of flygskam, or “flight shame,” has turned some Swedes bashful about their globe-trotting. A guerrilla campaign used Instagram to tally the planet-busting travels of top Swedish celebrities.
  • Hilm, 31, a health-care consultant who was on his way to hike across Austria for eight days, said he tried to live an environmentally responsible life. “I don’t drive a car. I eat mostly vegetarian. I live in an apartment, not a big house.”
  • He was stunned when he assessed the impact of his flights. “I did one of those calculators you can do online,” he said, “and 80 percent of my emissions were from travel.
  • “I don’t want to say I’ll never fly again, but I do want to be conscious about the decisions I make,”
  • What was it worth? Measuring carbon dioxide emissions from travel can be an inexact science. One popular online calculator suggested that Hilm’s trip would have led to about 577 pounds of carbon dioxide emissions if he had flown, compared with 118 pounds by rail, a savings of 80 percent (a quick arithmetic check appears after this list).
  • In the first six months of 2019, air passenger traffic was down 3.8 percent in Sweden compared with the previous year. Climate concerns are among several reasons for the downturn
  • Across Europe, air travel still ticked up — by 4.4 percent — in the first quarter of 2019
  • for young, green Europeans, saying no to flying is becoming a thing.
  • The shift has been inspired in part by Greta Thunberg
  • Thunberg has not been on a plane since 2015. This week, she said she would soon travel to the United States — by sailboat.
  • called Tagsemester, or Train Vacation
  • The aviation sector generates about 2.5 percent of global carbon dioxide emissions — meaning it’s only a small fraction of the problem
  • Jet fuel is currently untaxed in the E.U., unlike in the United States. France this month announced it would introduce an eco-tax on flights originating at French airports, with the money to be reinvested in rail networks and other environmentally friendly transport. Several other European countries have imposed or increased flight taxes. The Dutch government is lobbying for an E.U.-wide tax on aviation.
  • SAS, the largest airline in Scandinavia, is ending in-flight duty-free sales and asking passengers to pre-book meals so planes can be lighter and more fuel-efficient. Pilots have been urged to taxi on the ground with only one engine switched on.
  • He said the airline was pushing to expand its use of renewable fuels as quickly as possible.
  • Climate change experts caution that meaningful shifts will need to happen on a structural level that goes beyond any individual’s private actions. 
  • “In terms of personal climate activism broadly, whether you’re talking about aviation, reducing the amount of meat you eat, consumption choices, the answer is always: It is important, but it is insufficient,” said Greg Carlock, a manager at the World Resources Institute, a Washington think tank.
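The roughly 80 percent savings cited in the emissions comparison above follows directly from the two quoted estimates. Below is a minimal arithmetic sketch in Python; the 577-pound and 118-pound figures come from the article’s online calculator, and real numbers will vary with the calculator, route and aircraft assumed.

```python
# Back-of-the-envelope check of the "savings of 80 percent" claim,
# using only the two estimates quoted in the article.
plane_lbs_co2 = 577.0   # estimated CO2 for the trip by plane (per the article)
rail_lbs_co2 = 118.0    # estimated CO2 for the same trip by rail (per the article)

savings = (plane_lbs_co2 - rail_lbs_co2) / plane_lbs_co2
print(f"Taking the train avoids about {savings:.0%} of the flight's emissions")
# Prints roughly 80%, matching the figure given above.
```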
saberal

Opinion | Policing Is Not Broken, It's 'Literally Designed to Work in This Way' - The N... - 0 views

  • Last week, an anxious America awaited the jury’s decision. Officer Derek Chauvin was convicted on all charges for the murder of George Floyd. But whatever feelings greeted such a rare outcome were short-lived for many. The next day, a Virginia man named Isaiah Brown was on the phone with 911 police dispatch when a sheriff’s deputy shot him 10 times, allegedly mistaking the phone for a gun.
  • Today, I’ve gathered three guests who approach reform differently to see where we agree and don’t. Rashawn Ray is a fellow at the Brookings Institution and a professor of sociology at the University of Maryland. Randy Shrewsberry is a former police officer. He’s now the executive director of the Institute for Criminal Justice Training Reform. And Ash-Lee Woodard Henderson is the first Black woman to serve as co-executive director of the Highlander Research and Education Center in Tennessee, a social justice training center where seminal figures like Rosa Parks trained.
  • Right, I think that we see so much of what policing has looked like, which is about the criminalization of poverty. I think it’s important to note here that this is something that I want to emphasize that police and justice impacts everyone with the cases of someone like Daniel Shaver, who was shot to death while crying on the floor, or Tony Timpa, who is held down by police while they laughed on body cam, and how much of this is the policing of poverty and the policing of what we think police are supposed to be doing is not what they’re doing. And so, Rashawn, I want to hear from you. You’ve done so much work on this. What are your top priorities when it comes to reforming policing?
  • ...11 more annotations...
  • And I think we’ve seen that there is an expectation in this country of who is supposed to be policed and who is not supposed to be policed, that you’re supposed to go police those people over there, but if you order me to wear a mask, well, that’s just too much here. And we see time and time again that most killings by police start with traffic stops, mental health checks, domestic disturbances, low level offenses. We’ve seen with the cases of Philando Castile and others that traffic stops can be deadly. Randy, where does this come from? Why is the focus on low level offenses and not solving murders? I think a lot of people think that the police are focused on catching criminals, when that’s not really what they do.
  • Yeah. I mean, I think lovingly, I came to this position because we’ve been putting platinum bandaids and piecemeal reforms into place. And it hasn’t made policing any better for Black people or poor people or immigrant people, right? When we talk about defunding the police, we’re not just talking about the sheriff in your county or the P.D. in your inner city neighborhood. We’re talking about the state police. We’re talking about Capitol police who we literally watched hand-walk insurrectionists out of the Capitol on January 6. We’re talking about immigrant communities that are impacted by I.C.E., right? We’re talking about Customs and Border Patrol.
  • And the roots are embedded in white supremacy ideology that oftentimes we’re unwilling to admit. The other thing, good apples can’t simply override bad apples. Yes, overwhelmingly, officers get into it because they want to protect and serve. But we just heard from Randy what happens in that process. Good apples become poisoned. And they also can at times become rotten themselves. Because part of what happens is that they get swallowed up in the system. And due to qualified immunity, they are completely alleviated from any sort of financial culpability. And I think insurances can be a huge way to increase accountability.
  • So part of what we have to think through is better solutions. And what the research I’ve conducted suggests is that if we reallocate some of those calls for service, not only are there better people in the social service sector, such as mental health specialists or Department of Transportation better equipped to handle those things, but also police officers can then focus on the more violent crimes and increasing that clearance rate.
  • That’s how we got Ferguson, right? That’s how we ended up with the death of Michael Brown. So what all of this led me to is when you follow the money, just over the past five years, in the major 20 metropolitan areas in the United States, taxpayers have paid out over $2 billion with a B in settlements for police misconduct. Oftentimes, people are paying for their own brutality, so outside of police budgets, which have swelled over the past three decades. I mean, you have everything from over 40 percent in Oakland to well over 35 percent in cities like Chicago and Minneapolis, that these civilian payouts don’t even come from the police budget. And what it led me to is that if we had police department insurance policies, if we had more police officer malpractice individual liability insurance, we would see not only a shift in financial culpability, but also a shift in accountability.
  • How do we keep people safe if we defund the police? But I bet if I asked you, Jane or Rashawn or Randy, to close your eyes and tell me a time where you felt safe, what did it feel like, you wouldn’t tell me that there was a cop there. And if it was, it would probably be because that cop might have been your dad or your mom or your aunt or your uncle, right? Not because they were in their uniform in a cop car policing somebody else. So quite frankly, I think the only solution to policing in this country is abolition. And how do we get there through divestment and investment is really super clear.
  • Do I think that we can reform our way out of the crisis of policing in this country? I do not. And I don’t because I’ve seen so many times us try. I’ve seen us say that if we just trained them more, it would be different. I’ve seen us say, if we just banned no-knock warrants, it would be different. I’ve seen us say, if we just got body cams on these cops, which is more and more and more money going to policing, but what we’ve seen is that that hasn’t distracted or detracted them because they can continue to use reasonable force as their get out of jail and accountability-free card. So I just don’t believe that the data shows that reforming our way out of policing is keeping Black people free and alive.
  • But you know what? They did. But you know what also survived those historical periods? Law enforcement. You know why? Because law enforcement is the gatekeeper of legalized state sanctioned violence. Law enforcement abolition probably requires a revolution we haven’t seen before. Part of what abolitionists also want — because I think there are two main camps. There are some that are like, law enforcement shouldn’t exist. Prisons shouldn’t exist. There are others who are like, look, we need to reimagine it. Like those rotten trees, we need to cut it down. When you deal with a rotten tree or a rotten plant, simply cutting it down doesn’t make it go away. The roots come back, right? And oftentimes, the plant comes back stronger. And interestingly, it comes back in a different form, like it’s wrapped in a different package. And so, but there are some people who say, how about we address abolition from the standpoint of abolishing police departments as they currently stand and reimagining and rebuilding public safety in a way that’s different? See, even the terminology we use is really important — policing, law enforcement, public safety. Part of reimagining law enforcement is reimagining the terms we use for what safety means. And how I think about it is, who has the right to truly express their First Amendment right and be verbally and/or nonviolently expressive? It’s not illegal to be combative.
  • And one of my colleagues was reading a clip. And he was saying, yeah, we need more police surveillance. We need to make sure that we watch what they’re doing. We need more training. This clip was from the 1980s, almost around the same time where Ash was talking about she was born.
  • The United States taxpayer is essentially asked to foot this impossible and never-ending bill to maintain this failed system of policing, right? I want to pull a little bit on Randy’s last point and what Dr. Ray raised about guns as well. It’s like even Forbes, I think, last week mentioned that more than one mass shooting per day has occurred in 2021. And so if cops keep me safe from gun violence, this stat wouldn’t be real, right? So if police officers were keeping Black people safe from gun violence, the world will be a very different place. And I doubt we would be having this conversation in the first place. We’ve got to actually be innovative beyond the request for support for more money for more trainings, for more technology. And so, quite frankly, when we think about what’s happening on the federal level legislatively right now with the Justice and Policing Act, I think the movement for Black — well, not I think — I know the movement for Black Lives unequivocally doesn’t support it. Because, again, it’s an attempt at 1990 solutions to a 2021 problem
  • If you want to learn more about police reform, I recommend reading the text of the George Floyd Justice in Policing Act of 2021. I also recommend The New York Times Magazine piece that features a roundtable of experts and organizers. It’s called, “The message is clear: policing in America is broken and must change.”
Javier E

The Nightjet: A Big Bet on Train Travelers Who Take It Slow - The New York Times - 0 views

  • ÖBB said it expected ridership on Nightjet to increase 10 percent by the end of this year, to 1.5 million passengers, a rise fueled by people who want to avoid flying.
  • Prices for a seat to Venice start at 29.90 euros ($33) one way, which is still competitive with airfares, but they quickly climb to more than €100 for a sleeper cabin shared with two others.
  • Night trains, research has shown, actually cost more to operate because they are less efficient than daytime services. Dick Dunmore was the lead author of a 2017 study into night trains by Steer Group, a consulting firm, for the European Parliament. He said the main obstacles for night trains were track access charges, low occupancy in sleeping carriages, once-a-day routes and the complexity of staffing at night.
hannahcarter11

Store Workers to Get New Training: How to Handle Fights Over Masks - The New York Times - 0 views

    • hannahcarter11
       
      I can't even imagine how someone could get mad at someone else for trying to protect their safety. Even if the pandemic is a hoax, which it certainly is not, why not just be safe rather than sorry?
  • The training puts a spotlight on the unexpected challenges that store workers have been forced to grapple with during the pandemic.
  • Susan Driscoll, president of the Crisis Prevention Institute, said the online training program and accompanying Covid-19 Customer Conflict Prevention credential are “really focused on how to engage your thinking brain over your emotional brain.”
  • ...6 more annotations...
    • hannahcarter11
       
      They are trying to appeal to the customer's rational brain instead of emotional (rider instead of elephant). This is smart but will be difficult.
  • the program offers tips on “how to verbally and nonverbally communicate empathy and support” while wearing a mask
  • Or, Ms. Driscoll said, “when someone is defensive and losing their rationality, you give them a choice or set a limit.”
  • inquiries to the organization for de-escalation information have doubled since the pandemic started
  • The National Retail Federation said it did not have data on disputes at retailers
  • Many retail workers will receive a new sort of preparation for this year’s holiday season: training on how to manage conflicts with customers who resist mask-wearing, social distancing and store capacity limits
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 0 views

  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • ...46 more annotations...
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • I met with Kahneman
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.”
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • Over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • “We’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • About half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable. (A short simulation sketch at the end of these notes illustrates the small-sample effect.)
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • Most promising are a handful of video games. Their genesis was in the Iraq War.
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • He said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • Even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
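As a concrete illustration of the small-sample point behind the baseball-phenom question, here is a minimal simulation sketch in Python. It is an illustration added to these notes, not code from the article or the Mindware course; the .270 “true” hitting probability, the pool of 300 batters, and the 20-versus-500 at-bat comparison are assumptions chosen for the example, while the .450 threshold comes from Nisbett’s survey question.

    import random

    # Minimal sketch (illustrative assumptions, not figures from the article):
    # every batter is given the same true hitting probability of .270.
    TRUE_AVG = 0.270
    N_BATTERS = 300  # hypothetical pool of everyday batters

    def batting_average(at_bats: int) -> float:
        """Simulate `at_bats` attempts and return the observed batting average."""
        hits = sum(1 for _ in range(at_bats) if random.random() < TRUE_AVG)
        return hits / at_bats

    def count_at_or_above(threshold: float, at_bats: int) -> int:
        """Count simulated batters whose observed average reaches `threshold`."""
        return sum(1 for _ in range(N_BATTERS)
                   if batting_average(at_bats) >= threshold)

    random.seed(0)
    print("Batters at .450+ after 20 at bats: ", count_at_or_above(0.450, at_bats=20))
    print("Batters at .450+ after 500 at bats:", count_at_or_above(0.450, at_bats=500))
    # A typical run turns up a cluster of early ".450 hitters" at 20 at bats and
    # none at 500: small samples produce extreme averages far more often than
    # large ones, which is the law of large numbers doing its work.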
ethanmoser

Stopped train triggers major political row in Balkans | Fox News - 0 views

  • Stopped train triggers major political row in Balkans
  • A Serbian train, halted at the border with Kosovo and bearing signs reading "Kosovo is Serbian," has fueled a major crisis and escalated a potential Russia-West row over dominance in the heart of the Balkans.
  • Serbia accused Kosovo's leaders on Sunday of "wanting war" and warned that it would defend "every inch" of its territory, a day after the train, decorated in Serbian Christian Orthodox symbols and flags, was prevented from entering the neighboring nation.
  • ...5 more annotations...
  • Kosovo, supported by much of the West, declared independence from Serbia in 2008. But Serbia and its Slavic Orthodox ally, Russia, do not recognize the split.
  • "Yesterday, we were on the verge of clashes," Nikolic said after a meeting of the country's top security body following the train's overnight return to Belgrade. He accused the Kosovo Albanians of "wanting war."
  • "We are a country which has to protect its people and its territory," Nikolic said, in the strongest rhetoric since the NATO-led troops took control of Kosovo's borders in 1999.
  • Tensions between Serbia and Kosovo have soared following the recent detention in France of Ramush Haradinaj, a former Kosovo prime minister, on an arrest warrant from Serbia.
  • Kosovo has called the warrant illegitimate and urged France to ignore it, while Serbia is urging Haradinaj's quick extradition to face war crimes charges.
Javier E

HR Isn't Stopping Workplace Sexual Harassment - The Atlantic - 0 views

  • If HR is such a vital component of American business, its tentacles reaching deeply into many spheres of employees’ work lives, how did it miss the kind of sexual harassment at the center of the #MeToo movement? And given that it did, why are companies still putting so much faith in HR?
  • The simple and unpalatable truth is that HR isn’t bad at dealing with sexual harassment. HR is actually very good at it.
  • On The Office, Michael Scott once said of Toby, the Dunder Mifflin HR rep: “If I had a gun with two bullets, and I was in a room with Hitler, bin Laden, and Toby, I would shoot Toby twice.”
  • ...9 more annotations...
  • Fairly or not, HR is seen as the division of the company that slows things down, generates endless memos, meddles in employees’ personal business, holds compulsory “trainings,” and ruins any fun and spirit-lifting thing employees come up with
  • The real reason many workers don’t love human resources is that while the department often presents itself as functioning like a union—the open door for worker complaints, the updates on valuable new benefits—it is not a union.
  • Should the economy change, or should management decide to go in another direction, HR can just as quickly become assassin as friend.
  • What HR is actually responsible for—one of the central ways the department “adds value” to a company—is serving as the first line of defense against a sexual-harassment lawsuit
  • The task force had been charged with determining how much progress the country had made since that historic decision. Its finding: very little. “Much of the training done over the last 30 years has not worked as a prevention tool,” the task force found. That’s an incredible statement—three decades of failure.
  • It reveals that sexual harassment is “widespread” and “persistent,” and that 85 percent of workers who are harassed never report it. It found that employees are much more likely to come up with their own solution—such as avoiding the harasser, downplaying the harassment, or simply enduring it—than to seek help from HR. They are far more likely to ask a family member or co-worker for advice than to file a complaint, because they fear that they will face repercussions if they do.
  • This is why all of that training—the videos and online courses and worksheets—seems so useless: because it’s designed to serve as a defense against an employment lawsuit. The task force cited a study that found “no evidence that the training affected the frequency of sexual harassment experienced by the women in the workplace.” The task force also said that HR trainings and procedures are “too focused on protecting the employer from liability,” and not focused enough on ending the problem.
  • Most of the time, if the man is truly important to the company, the case is quickly whisked out of HR’s hands, the investigation delivered to lawyers and the final decision rendered by executives. These executives are under no legal imperative to terminate an alleged offender or even to enforce a particular sanction, only to ensure that the woman who made the report is safe in the future.
  • There is only one way to eradicate harassment from a workplace: by creating a climate and culture that starts at the very top of the company and establishes that harassment is not tolerated and will be punished severely. Middle managers can’t change the culture of a company.
Javier E

How to Prepare for an Automated Future - The New York Times - 0 views

  • We don’t know how quickly machines will displace people’s jobs, or how many they’ll take, but we know it’s happening — not just to factory workers but also to money managers, dermatologists and retail workers.
  • The logical response seems to be to educate people differently, so they’re prepared to work alongside the robots or do the jobs that machines can’t. But how to do that, and whether training can outpace automation, are open questions.
  • Pew Research Center and Elon University surveyed 1,408 people who work in technology and education to find out if they think new schooling will emerge in the next decade to successfully train workers for the future. Two-thirds said yes; the rest said no.
  • ...18 more annotations...
  • People still need to learn skills, the respondents said, but they will do that continuously over their careers. In school, the most important thing they can learn is how to learn.
  • At universities, “people learn how to approach new things, ask questions and find answers, deal with new situations,”
  • Many survey respondents said a degree was not enough — or not always the best choice, especially given its price tag.
  • These are not necessarily easy to teach.
  • “Many of the ‘skills’ that will be needed are more like personality characteristics, like curiosity, or social skills that require enculturation to take hold.”
  • “I have complete faith in the ability to identify job gaps and develop educational tools to address those gaps,” wrote Danah Boyd, a principal researcher at Microsoft Research and founder of Data & Society, a research institute. “I have zero confidence in us having the political will to address the socioeconomic factors that are underpinning skill training.”
  • Andrew Walls, managing vice president at Gartner, wrote, “Barring a neuroscience advance that enables us to embed knowledge and skills directly into brain tissue and muscle formation, there will be no quantum leap in our ability to ‘up-skill’ people.”
  • Schools will also need to teach traits that machines can’t yet easily replicate, like creativity, critical thinking, emotional intelligence, adaptability and collaboration.
  • Many of them expect more emphasis on certificates or badges, earned from online courses or workshops, even for college graduates.
  • One potential future, said David Karger, a professor of computer science at M.I.T., would be for faculty at top universities to teach online and for mid-tier universities to “consist entirely of a cadre of teaching assistants who provide support for the students.”
  • Portfolios of work are becoming more important than résumés.
  • “Three-dimensional materials — in essence, job reels that demonstrate expertise — will be the ultimate demonstration of an individual worker’s skills.”
  • Consider it part of your job description to keep learning, many respondents said — learn new skills on the job, take classes, teach yourself new things.
  • Focus on learning how to do tasks that still need humans, said Judith Donath of Harvard’s Berkman Klein Center for Internet & Society: teaching and caregiving; building and repairing; and researching and evaluating
  • The problem is that not everyone is cut out for independent learning, which takes a lot of drive and discipline.
  • People who are suited for it tend to come from privileged backgrounds, with a good education and supportive parents,
  • “The fact that a high degree of self-direction may be required in the new work force means that existing structures of inequality will be replicated in the future,”
  • “The ‘jobs of the future’ are likely to be performed by robots,” said Nathaniel Borenstein, chief scientist at Mimecast, an email company. “The question isn’t how to train people for nonexistent jobs. It’s how to share the wealth in a world where we don’t need most people to work.”
aidenborst

US troops accidentally storm olive oil factory in Bulgaria - CNNPolitics - 0 views

  • The US military has issued an apology after soldiers on a training exercise last month accidentally stormed a factory in Bulgaria that produces processing machinery for olive oil.
  • "The U.S. Army takes training seriously and prioritizes the safety of our soldiers, our allies, and civilians. We sincerely apologize to the business and its employees," the US military said in the statement. "We always learn from these exercises and are fully investigating the cause of this mistake. We will implement rigorous procedures to clearly define our training areas and prevent this type of incident in the future."
  • "they believed was part of the training area, but that was occupied by Bulgarian civilians operating a private business." No weapons were fired, the US military also said.
  • ...3 more annotations...
  • US soldiers of the 173rd Airborne Brigade had spent days practicing how to seize and secure the decommissioned Cheshnegirovo airfield in Bulgaria, training that included clearing bunkers across the airfield, according to a statement from US Army Europe and Africa released Tuesday.
  • CNN has reached out to the factory owner, the US embassy in Bulgaria, and Bulgaria's Interior and Defense Ministries for comment. Bulgarian President Rumen Radev condemned the incident and said he expects there will be an investigation, CNN affiliate Nova TV reported Monday.
  • "It is inadmissible to have the lives of Bulgarian citizens disturbed and put at risk by military formations, whether Bulgarian or belonging to a foreign army," Radev said. "The exercises with our allies on the territory of Bulgaria should contribute to building security and trust in collective defense, not breed tension."
dytonka

Trump's diversity training order faces lawsuit - 0 views

  • Trump said such training is “teaching people to hate our country.”
    • dytonka
       
      I mean well, is there much to love about it?
  • The lawsuit, however, said the wording of the order is overly broad and is already having a chilling effect on diversity training. Some organizations have asked that words including “systemic racism” and “white privilege” be banned from training, the complaint said. It also cited the University of Iowa’s decision to suspend its diversity efforts for fear of losing government funding.