
Home/ History Readings/ Group items tagged robotics


Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • ...34 more annotations...
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • The modern economy is complex, and most of these trends have multiple causes.
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  •  economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
nrashkind

How robots could help us combat pandemics in the future - CNN - 0 views

shared by nrashkind on 29 Mar 20
  • Although many people around the world are practicing social distancing during the coronavirus pandemic, those on the frontlines fighting the virus can't stay home. Experts agree that robots could take over the "dull, dirty and dangerous" jobs humans are currently fulfilling.
  • The panel reminds us that similar plans for robotic assistance were created after the 2015 Ebola outbreak -- but the funding and motivation dropped off.
  • "We could have been ready, and now we're trying to play catch-up during a pandemic," the researchers said.
  • ...10 more annotations...
  • "I don't think that we are ready this time, but hopefully with our collective efforts we can be more ready next time," said Guang-Zhong Yang, founding editor of Science Robotics, one of the authors of the editorial and dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.
  • It's one of many examples showing how robots could prevent human contact from spreading the virus.
  • Yang, who did not test positive for the virus, said some developed robotic technologies are already helping, like robots being used for disinfecting hospitals and surfaces like plastic, metal and glass where the virus can live for up to 72 hours.
  • Yang's work primarily focuses on surgical robots, which can be operated remotely. He thinks they could be used to help clinicians who are treating contagious patients in crowded ICU wards.
  • The authors of the editorial, consisting of robotics experts across the globe, identified key areas where robots could lend assistance that would remove humans from harm's way during a pandemic.
  • This includes disease prevention, diagnosis and screening, patient care and disease management.
  • Remote presence robots could also stand in the place of someone in a meeting, basically providing their presence through a video screen.
  • The pandemic is also highlighting a need for assistance and social robots to help those at home, especially the elderly.
  • And roboticists are realizing that some of the simplest tasks, which carry risk during pandemics, could be assumed by robots.
  • Roboticists don't want to see us in this situation again -- realizing the resources we need in the middle of a problem with limited methods of action.
aleija

The Robot Surgeon Will See You Now - The New York Times - 0 views

  • As he moved the handles — up and down, left and right — the robot mimicked each small motion with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot’s tiny claws did much the same. This is how surgeons like Dr. Fer have long used robots when operating on patients. They can remove a prostate from a patient while sitting at a computer console across the room.
  • The aim is not to remove surgeons from the operating room but to ease their load and perhaps even raise success rates — where there is room for improvement — by automating particular phases of surgery.
  • Robots can already exceed human accuracy on some surgical tasks, like placing a pin into a bone (a particularly risky task during knee and hip replacements). The hope is that automated robots can bring greater accuracy to other tasks, like incisions or suturing, and reduce the risks that come with overworked surgeons.
  • ...3 more annotations...
  • Five years ago, researchers with the Children’s National Health System in Washington, D.C., designed a robot that could automatically suture the intestines of a pig during surgery.
  • Surgical robots are equipped with cameras that record three-dimensional video of each operation. The video streams into a viewfinder that surgeons peer into while guiding the operation, watching from the robot’s point of view.
  • But this process came with its own asterisk. When the system told the robot where to move, the robot often missed the spot by millimeters. Over months and years of use, the many metal cables inside the robot’s twin arms have stretched and bent in small ways, so its movements were not as precise as they needed to be.
Javier E

AI Is Running Circles Around Robotics - The Atlantic - 0 views

  • Large language models are drafting screenplays and writing code and cracking jokes. Image generators, such as Midjourney and DALL-E 2, are winning art prizes and democratizing interior design and producing dangerously convincing fabrications. They feel like magic. Meanwhile, the world’s most advanced robots are still struggling to open different kinds of doors
  • the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”
  • The paradox has grown only more apparent in the past few years: AI research races forward; robotics research stumbles. In part that’s because the two disciplines are not equally resourced. Fewer people work on robotics than on AI.
  • ...7 more annotations...
  • In theory, a robot could be trained on data drawn from computer-simulated movements, but there, too, you must make trade-offs
  • Jang compared computation to a tidal wave lifting technologies up with it: AI is surfing atop the crest; robotics is still standing at the water’s edge.
  • Whatever its causes, the lag in robotics could become a problem for AI. The two are deeply intertwined
  • But the biggest obstacle for roboticists—the factor at the core of Moravec’s paradox—is that the physical world is extremely complicated, far more so than language
  • Some researchers are skeptical that a model trained on language alone, or even language and images, could ever achieve humanlike intelligence. “There’s too much that’s left implicit in language,” Ernest Davis, a computer scientist at NYU, told me. “There’s too much basic understanding of the world that is not specified.” The solution, he thinks, is having AI interact directly with the world via robotic bodies. But unless robotics makes some serious progress, that is unlikely to be possible anytime soon.
  • For years already, engineers have used AI to help build robots. In a more extreme, far-off vision, super-intelligent AIs could simply design their own robotic body. But for now, Finn told me, embodied AI is still a ways off. No android assassins. No humanoid helpers.
  • Set in the context of our current technological abilities, HAL’s murderous exchange with Dave from 2001: A Space Odyssey (“Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.”) would read very differently. The machine does not refuse to help its human master. It simply isn’t capable of doing so.
Javier E

Robots are performing eyelash extensions using AI technology - The Washington Post - 0 views

  • These devices are only becoming more commonplace. Davis has another appointment to get a gel manicure from a San Francisco salon with robots that paint nails in 10 minutes.
  • Davis went back to LUUM during her one-hour lunch break recently to get a fill-in lash extension. “I’m busy and this is really convenient because I know I’ll be able to go in and out. I think it’s more effective,” she said.
  • Eyelash artists are constantly hunching over to work on clients. “You don’t see many lash artists with gray hair and that’s because it’s an extremely tough job. The robots kind of remove the drudgery,” said Harding, LUUM’s CEO.
  • ...3 more annotations...
  • Shephard, the lash association president, said the robots are an exciting addition to the industry. “Robot services may be an option for those who currently don’t get lash extensions because of the time and cost of the human experience. And robots can target a different type of clientele, which may expand and help our industry grow even further,” Shephard said. “There’s space for both of us.”
  • “Robots can help by making certain things better,” Hauser said. He said human lash artists may be less precise than robots and clients could end up with glue in their eyes. “Precision and repetition are the things the robots are really good for.”
  • the customer’s face, and another is focused on the station where the robot picks up the extension.
Javier E

Evidence That Robots Are Winning the Race for American Jobs - The New York Times - 0 views

  • that paper was a conceptual exercise. The new one uses real-world data — and suggests a more pessimistic future. The researchers said they were surprised to see very little employment increase in other occupations to offset the job losses in manufacturing. That increase could still happen, they said, but for now there are large numbers of people out of work, with no clear path forward — especially blue-collar men without college degrees.
  • “The conclusion is that even if overall employment and wages recover, there will be losers in the process, and it’s going to take a very long time for these communities to recover,” Mr. Acemoglu said.
  • “If you’ve worked in Detroit for 10 years, you don’t have the skills to go into health care,” he said. “The market economy is not going to create the jobs by itself for these workers who are bearing the brunt of the change.”
  • ...7 more annotations...
  • The study analyzed the effect of industrial robots in local labor markets in the United States. Robots are to blame for up to 670,000 lost manufacturing jobs between 1990 and 2007, it concluded, and that number will rise because industrial robots are expected to quadruple.
  • The paper adds to the evidence that automation, more than other factors like trade and offshoring that President Trump campaigned on, has been the bigger long-term threat to blue-collar jobs. The researchers said the findings — “large and robust negative effects of robots on employment and wages” — remained strong even after controlling for imports, offshoring, software that displaces jobs, worker demographics and the type of industry.
  • Robots affected both men’s and women’s jobs, the researchers found, but the effect on male employment was up to twice as big.
  • The data doesn’t explain why, but Mr. Acemoglu had a guess: Women are more willing than men to take a pay cut to work in a lower-status field.
  • The findings fuel the debate about whether technology will help people do their jobs more efficiently and create new ones, as it has in the past, or eventually displace humans.
  • Mr. Restrepo said the problem might be that the new jobs created by technology are not in the places that are losing jobs, like the Rust Belt. “I still believe there will be jobs in the years to come, though probably not as many as we have today,” he said. “But the data have made me worried about the communities directly exposed to robots.”
  • The next question is whether the coming wave of technologies — like machine learning, drones and driverless cars — will have similar effects, but on many more people.
Javier E

What Jobs Will the Robots Take? - Derek Thompson - The Atlantic - 0 views

  • Nearly half of American jobs today could be automated in "a decade or two," according to a new paper
  • The question is: Which half?
  • Where do machines work better than people?
  • ...14 more annotations...
  • in the past 30 years, software and robots have thrived at replacing a particular kind of occupation: the average-wage, middle-skill, routine-heavy worker, especially in manufacturing and office admin. 
  • the next wave of computer progress will continue to shred human work where it already has: manufacturing, administrative support, retail, and transportation. Most remaining factory jobs are "likely to diminish over the next decades," they write. Cashiers, counter clerks, and telemarketers are similarly endangered
  • here's a chart of the ten jobs with a 99-percent likelihood of being replaced by machines and software. They are mostly routine-based jobs (telemarketing, sewing) and work that can be solved by smart algorithms (tax preparation, data entry keyers, and insurance underwriters)
  • I've also listed the dozen jobs they consider least likely to be automated. Health care workers, people entrusted with our safety, and management positions dominate the list.
  • If you wanted to use this graph as a guide to the future of automation, your upshot would be: Machines are better at rules and routines; people are better at directing and diagnosing. But it doesn't have to stay that way.
  • Although the past 30 years have hollowed out the middle, high- and low-skill jobs have actually increased, as if protected from the invading armies of robots by their own moats
  • Higher-skill workers have been protected by a kind of social-intelligence moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what managers are paid to do
  • lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology mimicked a savant infant: Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, menial work done by people without much education (like home health care workers, or fast-food attendants) has been spared, too.
  • robots are finally crossing these moats by moving and thinking like people. Amazon has bought robots to work its warehouses. Narrative Science can write earnings summaries that are indistinguishable from wire reports. We can say to our phones "I'm lost, help" and our phones can tell us how to get home.
  • In a decade, the idea of computers driving cars went from impossible to boring.
  • The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing things. Now data analytics and self-driving cars suggest they might be better at pattern-recognition and driving. So what are we better at?
  • One conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for, other humans. In this light, automation doesn't make the world worse. Far from it: It creates new opportunities for human ingenuity.  
  • But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
  • It would be anxious enough if we knew exactly which jobs are next in line for automation. The truth is scarier. We don't really have a clue.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck - The New York Times - 0 views

  • In Robot America, most manual laborers will have been replaced by herculean bots. Truck drivers, cabbies, delivery workers and airline pilots will have been superseded by vehicles that do it all. Doctors, lawyers, business executives and even technology columnists for The New York Times will have seen their ranks thinned by charming, attractive, all-knowing algorithms.
  • U.B.I., and it goes like this: As the jobs dry up because of the spread of artificial intelligence, why not just give everyone a paycheck?
  • While U.B.I. has been associated with left-leaning academics, feminists and other progressive activists, it has lately been adopted by a wider range of thinkers, including some libertarians and conservatives. It has also gained support among a cadre of venture capitalists in New York and Silicon Valley, the people most familiar with the potential for technology to alter modern work.
  • ...15 more annotations...
  • tech supporters of U.B.I. consider machine intelligence to be something like a natural bounty for society: The country has struck oil, and now it can hand out checks to each of its citizens.
  • These supporters argue machine intelligence will produce so much economic surplus that we could collectively afford to liberate much of humanity from both labor and suffering.
  • As computers perform more of our work, we’d all be free to become artists, scholars, entrepreneurs or otherwise engage our passions in a society no longer centered on the drudgery of daily labor.
  • “For a couple hundred years, we’ve constructed our entire world around the need to work. Now we’re talking about more than just a tweak to the economy — it’s as foundational a departure as when we went from an agrarian society to an industrial one.”
  • “I think it’s a bad use of a human to spend 20 years of their life driving a truck back and forth across the United States,” Mr. Wenger said. “That’s not what we aspire to do as humans — it’s a bad use of a human brain — and automation and basic income is a development that will free us to do lots of incredible things that are more aligned with what it means to be human.”
  • There is an urgency to the techies’ interest in U.B.I. They argue that machine intelligence reached an inflection point in the last couple of years, and that technological progress now looks destined to change how most of the world works.
  • Wage growth is sluggish, job security is nonexistent, inequality looks inexorable, and the ideas that once seemed like a sure path to a better future (like taking on debt for college) are in doubt. Even where technology has created more jobs, like the so-called gig economy work created by services like Uber, it has only added to our collective uncertainty about the future of work.
  • people are looking at these trends and realizing these questions about the future of work are more real and immediate than they guessed,”
  • A cynic might see the interest of venture capitalists in U.B.I. as a way for them to atone for their complicity in the tech that might lead to permanent changes in the global economy.
  • they don’t see U.B.I. merely as a defense of the current social order. Instead they see automation and U.B.I. as the most optimistic path toward wider social progress.
  • When you give everyone free money, what do people do with their time? Do they goof off, or do they try to pursue more meaningful pursuits? Do they become more entrepreneurial? How would U.B.I. affect economic inequality? How would it alter people’s psychology and mood? Do we, as a species, need to be employed to feel fulfilled, or is that merely a legacy of postindustrial capitalism?
  • Proponents say these questions will be answered by research, which in turn will prompt political change. For now, they argue the proposal is affordable if we alter tax and welfare policies to pay for it, and if we account for the ways technological progress in health care and energy will reduce the amount necessary to provide a basic cost of living.
  • They also note that increasing economic urgency will push widespread political acceptance of the idea. “There’s a sense that growing inequality is intractable, and that we need to do something about it,
  • Andrew L. Stern, a former president of the Service Employees International Union, who is working on a book about U.B.I., compared the feeling of the current anxiety around jobs to a time of war. “I grew up during the Vietnam War, and my parents were antiwar for one reason: I could be drafted,” he said.
  • Today, as people across all income levels become increasingly worried about how they and their children will survive in tech-infatuated America, “we are back to the Vietnam War when it comes to jobs,
Javier E

Will You Lose Your Job to a Robot? Silicon Valley Is Split - NYTimes.com - 0 views

  • The question for Silicon Valley is whether we’re heading toward a robot-led coup or a leisure-filled utopia.
  • Interviews with 2,551 people who make, research and analyze new technology. Most agreed that robotics and artificial intelligence would transform daily life by 2025, but respondents were almost evenly split about what that might mean for the economy and employment.
  • techno-optimists. They believe that even though machines will displace many jobs in a decade, technology and human ingenuity will produce many more, as happened after the agricultural and industrial revolutions. The meaning of “job” might change, too, if people find themselves with hours of free time because the mundane tasks that fill our days are automated.
  • ...8 more annotations...
  • The other half agree that some jobs will disappear, but they are not convinced that new ones will take their place, even for some highly skilled workers. They fear a future of widespread unemployment, deep inequality and violent uprisings — particularly if policy makers and educational institutions don’t step in.
  • “We’re going to have to come to grips with a long-term employment crisis and the fact that — strictly from an economic point of view, not a moral point of view — there are more and more ‘surplus humans.’”
  • “The degree of integration of A.I. into daily life will depend very much, as it does now, on wealth. The people whose personal digital devices are day-trading for them, and doing the grocery shopping and sending greeting cards on their behalf, are people who are living a different life than those who are worried about missing a day at one of their three jobs due to being sick, and losing the job and being unable to feed their children.”
  • “Only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”
  • “We hardly dwell on the fact that someone trying to pick a career path that is not likely to be automated will have a very hard time making that choice. X-ray technician? Outsourced already, and automation in progress. The race between automation and human work is won by automation.”
  • “Robotic sex partners will be commonplace. … The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy’?”
  • “Employment will be mostly very skilled labor — and even those jobs will be continuously whittled away by increasingly sophisticated machines. Live, human salespeople, nurses, doctors, actors will be symbols of luxury, the silk of human interaction as opposed to the polyester of simulated human contact.”
  • The biggest exception will be jobs that depend upon empathy as a core capacity — schoolteacher, personal service worker, nurse. These jobs are often those traditionally performed by women. One of the bigger social questions of the mid-late 2020s will be the role of men in this world.”
Javier E

When the Robots Take Our Jobs Majoring in STEM fields might teach students how to buil... - 0 views

  • Majoring in STEM fields might teach students how to build robots, but studying history will teach them what to do when the robots take their jobs.
  • Today, technological advances threaten to make human labor obsolete across a broad range of skilled professions, as even the bright young entrepreneurs and engineers being nurtured by our universities are sure to discover.
  • Americans looking to assist the DPs of the twenty-first century might benefit from learning about the experiences of rural black southerners in the 1960s, whose responses to displacement sought to hold political leaders accountable, ensure a fairer distribution of resources, and empower laid off workers to craft creative solutions to their problems.
  • ...6 more annotations...
  • Black southerners understood that unemployment and poverty were not caused solely by changes in the market or the laws of supply and demand.
  • Before the 1960s, their chief concern was how to get black people to work for them. Now, the problem was what to do with workers whose labor was no longer needed—and who could now vote. In the majority-black plantation counties, landowners feared the election of social justice advocates who would increase taxes on the wealthy to pay for job training programs, improved education, infrastructure spending, and other investments that could help unemployed workers adjust to the new economy.
  • Plantation owners’ preferred solution to this dilemma was for African Americans to leave the region, and they tried to discourage unemployed people from remaining in their communities by cutting public assistance programs, blocking economic development efforts, and opposing antipoverty projects initiated by the federal government’s War on Poverty.
  • With support from new federal agencies created to address economic inequities, southern social justice activists experimented with innovative methods for alleviating unemployment in the late 1960s and early 1970s
  • All of these projects were open to poor white people as well as black Americans, but many white southerners were reluctant to participate. Supporters of antipoverty programs faced violent attacks and economic reprisals by the same white supremacist groups that resisted the civil rights movement, and opponents portrayed the War on Poverty as a ploy to transfer wealth from hardworking white Americans to undeserving black Americans in an effort to discredit it. As one federal official observed, white southerners were “led to believe that Poverty Programs are for Negroes only. . . . The poor white man is not encouraged to take advantage of his Government’s efforts to lift him out of the pits of poverty.”
  • Over the next several decades, the idea that government assistance was for lazy black people and that self-respecting “real” Americans could succeed through individualism and hard work seeped into the national political discourse. Segregationist presidential contender George Wallace laid the groundwork for this shift in his campaigns of 1964 and 1968 by equating efforts to ensure racial equality with federal tyranny
Javier E

What Elon Musk's 'Age of Abundance' Means for the Future of Capitalism - WSJ - 0 views

  • When it comes to the future, Elon Musk’s best-case scenario for humanity sounds a lot like Sci-Fi Socialism.
  • “We will be in an age of abundance,” Musk said this month.
  • Sunak said he believes the act of work gives meaning, and had some concerns about Musk’s prediction. “I think work is a good thing, it gives people purpose in their lives,” Sunak told Musk. “And if you then remove a large chunk of that, what does that mean?”
  • ...20 more annotations...
  • Part of the enthusiasm behind the sky-high valuation of Tesla, where he is chief executive, comes from his predictions for the auto company’s abilities to develop humanoid robots—dubbed Optimus—that can be deployed for everything from personal assistants to factory workers. He’s also founded an AI startup, dubbed xAI, that he said aims to develop its own superhuman intelligence, even as some are skeptical of that possibility. 
  • Musk likes to point to another work of Sci-Fi to describe how AI could change our world: a series of books by the late, self-described socialist author Iain Banks that revolve around a post-scarcity society that includes superintelligent AI.
  • That is the question.
  • “We’re actually going to have—and already do have—a massive shortage of labor. So, I think we will have not people out of work but actually still a shortage of labor—even in the future.” 
  • Musk has cast his work to develop humanoid robots as an attempt to solve labor issues, saying there aren’t enough workers and cautioning that low birthrates will be even more problematic. 
  • Instead, Musk predicts robots will be taking jobs that are uncomfortable, dangerous or tedious. 
  • A few years ago, Musk declared himself a socialist of sorts. “Just not the kind that shifts resources from most productive to least productive, pretending to do good, while actually causing harm,” he tweeted. “True socialism seeks greatest good for all.”
  • “It’s fun to cook food but it’s not that fun to wash the dishes,” Musk said this month. “The computer is perfectly happy to wash the dishes.”
  • In the near term, Goldman Sachs in April estimated generative AI could boost the global gross domestic product by 7% during the next decade and that roughly two-thirds of U.S. occupations could be partially automated by AI. 
  • Vinod Khosla, a prominent venture capitalist whose firm has invested in the technology, predicted within a decade AI will be able to do “80% of 80%” of all jobs today.
  • “I believe the need to work in society will disappear in 25 years for those countries that adapt these technologies,” Khosla said. “I do think there’s room for universal basic income assuring a minimum standard and people will be able to work on the things they want to work on.” 
  • Forget universal basic income. In Musk’s world, he foresees something more lush, where most things will be abundant except unique pieces of art and real estate. 
  • “We won’t have universal basic income, we’ll have universal high income,” Musk said this month. “In some sense, it’ll be somewhat of a leveler or an equalizer because, really, I think everyone will have access to this magic genie.” 
  • All of which kind of sounds a lot like socialism—except it’s unclear who controls the resources in this Muskism society
  • “Digital super intelligence combined with robotics will essentially make goods and services close to free in the long term,” Musk said
  • “What is an economy? An economy is GDP per capita times capita.” Musk said at a tech conference in France this year. “Now what happens if you don’t actually have a limit on capita—if you have an unlimited number of…people or robots? It’s not clear what meaning an economy has at that point because you have an unlimited economy effectively.”
  • In theory, humanity would be freed up for other pursuits. But what? Baby making. Bespoke cooking. Competitive human-ing. 
  • “Obviously a machine can go faster than any human but we still have humans race against each other,” Musk said. “We still enjoy competing against other humans to, at least, see who was the best human.”
  • Still, even as Musk talks about this future, he seems to be grappling with what it might actually mean in practice and how it is at odds with his own life. 
  • “If I think about it too hard, it, frankly, can be dispiriting and demotivating, because…I put a lot of blood, sweat and tears into building companies,” he said earlier this year. “If I’m sacrificing time with friends and family that I would prefer but then ultimately the AI can do all these things, does that make sense?”“To some extent,” Musk concluded, “I have to have a deliberate suspension of disbelief in order to remain motivated.”
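Musk's "GDP per capita times capita" framing above can be sketched numerically. The figures below are purely illustrative (the labor-force size and output-per-worker numbers are assumptions, not from the article); the point is only that if "capita" includes an unbounded robot workforce at fixed output per worker, total output scales without limit:

```python
# Illustrative sketch of Musk's framing: GDP = (output per capita) x capita.
# All numbers below are hypothetical placeholders, not sourced figures.
def gdp(output_per_worker: float, workers: float) -> float:
    """Total output under the simple per-capita framing."""
    return output_per_worker * workers

human_workers = 165e6         # assumed US-scale labor force
output_per_worker = 140_000.0 # assumed annual output per worker, in dollars

baseline = gdp(output_per_worker, human_workers)
# Add a billion robot "workers" at the same productivity:
with_robots = gdp(output_per_worker, human_workers + 1e9)

print(f"Output with robots is {with_robots / baseline:.1f}x the baseline")
```

Under this toy model output just grows linearly with head (or robot) count, which is why Musk argues the notion of "an economy" loses its usual meaning once capita is effectively unlimited.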
Javier E

The Sidney Awards, Part 2 - NYTimes.com - 0 views

  • a piece in Wired called “Better Than Human: Why Robots Will — And Must — Take Our Jobs.” He asserted that robots will soon be performing 70 percent of existing human jobs.
  • “Kludgeocracy in America.” While we’ve been having a huge debate about the size of government, the real problem, he writes, is that the growing complexity of government has made it incoherent. The Social Security system was simple. But now we have a maze of saving mechanisms
  • One of the reasons we have such complex structures, Teles argues, is that Americans dislike government philosophically, but like government programs operationally. Rather than supporting straightforward government programs, they support programs in which public action is hidden behind a morass of tax preferences, obscure regulations and intricate litigation.
malonema1

Trump Lifts Refugee Suspension, but 11 Countries Face More Review - The New York Times - 0 views

  • WASHINGTON — President Trump signed an executive order on Tuesday resuming the admission of refugees to the United States under tighter security screening. But administration officials said they will subject 11 unidentified countries to another 90-day review for potential threats.The order lifted a suspension on new refugee admissions that Mr. Trump first imposed shortly after taking office in January. At the time, it was part of a broader effort to limit the flow of foreigners admitted to the United States on the grounds of security, an initiative that has generated one of the sharpest legal and political debates of his nine-month-old presidency.
  • It was not clear whether the new screening procedures would significantly diminish the chances for many applicants. While refugees who were vetted and approved before Mr. Trump took office have been allowed into the country this year, no new applications have been processed or approved since June. Mr. Trump has already moved aggressively to scale back the nation’s refugee program, imposing a limit of 45,000 — the lowest in more than three decades — on the number of people fleeing persecution who can be resettled in the United States over the fiscal year that started on Oct. 1. The action announced on Tuesday, while restarting the admissions process halted earlier this year, could result in new roadblocks or even outright bans for refugees from the 11 countries, potentially narrowing the pool even further.
  • The White House said that both reviews — the one that has been completed and the new, 90-day one — aim to secure the United States from a clear danger from terrorist groups seeking to infiltrate the country. “The review process for refugees” required by the president “has made our nation safer,” the new order said.
  • ...1 more annotation...
  • Justice Sonia Sotomayor dissented, saying that she would have simply dismissed the case and allowed the appeals court decision to remain on the books. Erasing that precedent may have implications for the new challenge to the September order. Last week, in blocking the new order, Judge Derrick K. Watson, of the Federal District Court in Honolulu, relied heavily on the Ninth Circuit’s decision.
Javier E

Fewer Americans are working. Don't blame immigrants or food stamps. - The Washington Post - 0 views

  • The share of Americans with jobs dropped 4.5 percentage points from 1999 to 2016 — amounting to about 6.8 million fewer workers in 2016.
  • Between 50 and 70 percent of that decline probably was due to an aging population.
  • pretty much all the missing jobs are accounted for.
  • ...17 more annotations...
  • trade with China and the rise of robots are to blame for millions of the missing jobs.
  • Other popular scapegoats, such as immigration, food stamps and Obamacare, did not even move the needle.
  • The era of vanishing jobs happened alongside one of the most unusual, disruptive eras in modern economic history — China’s accession to the World Trade Organization in 2001 and its subsequent rise to the top of the global export market.
  • this competition cost the economy about 2.65 million jobs over the period.
  • Automation also seems to have cost more jobs than it created. Guided by research showing that each robot takes the jobs of about 5.6 workers and that 250,475 robots had been added since 1999, the duo estimated that robots cost the economy another 1.4 million workers.
  • Abraham and Kearney used previous research into how teens and adults respond to rising wages to produce a high-end estimate of the impact of minimum wages over this period. Other recent research has found either a small effect or no effect. In the end, they combined those figures to find that about 0.49 million workers were lost.
  • the labor force shrank by about 0.36 million as an increasing number of workers drew disability benefits.
  • The paper’s most striking finding is not, however, speculation on idle American youths. It is that many of the topics that dominate political discourse about the labor market — such as immigration, food stamps and Obamacare — are unlikely to bring back lost jobs.
  • There were about 6.5 million former prisoners in the United States between the ages of 18 and 64 in 2014, according to the best available data. Assume that 60 percent of them served time as a result of policies implemented since the 1990s, account for their ages, time served, and pre-prison earnings, and you get a conservative estimate of 0.32 million lost jobs.
  • What did not reduce employment
  • Immigration Most research indicates that immigration does not reduce native employment rates.
  • Food stamps (Supplemental Nutrition Assistance Program) SNAP benefits average about $4.11 per person per day. Able-bodied adults are generally cut off from benefits unless they are working.
  • The Affordable Care Act Obamacare went into effect in 2014 and has not had a noticeable impact on jobs to date.
  • Working spouses who allow men to stay home While this is a popular theory, the share of men who are not in the labor force but had a working spouse actually fell slightly between 1999 and 2015
  • other explanations are out there, pushing and pulling the estimates in either direction.
  • The economists estimated that roughly 0.15 million people were not working because of the expansion of a disability insurance program run by the Department of Veterans Affairs.
  • Instead, policymakers should be focusing on the forces that took those jobs in the first place: import competition, automation, incarceration and disability insurance.
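The robot figure cited above is a back-of-the-envelope product of two reported numbers, and can be checked directly (the inputs are the article's reported figures; the calculation itself is just multiplication):

```python
# Check of the robot job-loss estimate as reported in the article:
# 250,475 robots added since 1999, each displacing about 5.6 workers.
ROBOTS_ADDED = 250_475   # industrial robots added since 1999 (reported)
JOBS_PER_ROBOT = 5.6     # estimated workers displaced per robot (reported)

jobs_lost = ROBOTS_ADDED * JOBS_PER_ROBOT
print(f"Estimated jobs lost to robots: {jobs_lost:,.0f}")  # ~1.4 million
```

The product comes to roughly 1.4 million workers, matching the estimate in the excerpt.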
Javier E

How one university changed overnight when it let 25 semiautonomous robots roam its camp... - 0 views

  • After witnessing his first robot food delivery outside Commonwealth Hall, John Farina — the awestruck associate professor of religious studies watching Shamor Williams retrieve his lunch — said he is concerned about the machines' impact on human interaction. In class, he said, students engage with one another far less.
  • "They get out of class, they whip out the cellphones and bump shoulders with one another,” he said. “I could give the same exam year after year and not worry it’s going to be passed around.” New technology is often accompanied by new benefits, he said, but society needs to remain keenly aware of what it strips away, especially in an educational setting. “I’m not anti-technology,” he added. “But you want to create a community of learning, a face-to-face experience of sharing ideas and carrying on discussion about those ideas. That’s why we’re here.
Javier E

Opinion | Three Things Americans Should Learn From Xi's China - The New York Times - 0 views

  • Creating a Chinese version of the World Bank, Mr. Xi inaugurated the Asian Infrastructure Investment Bank.
  • Instead of the American dream, he speaks of the “Chinese dream,” which describes the collective pride that people feel when they overcome a century of disorder and colonial humiliation to reclaim their status as a great power.
  • I asked half a dozen scholars who study China what lessons Americans should draw from Mr. Xi’s tenure so far. Here’s a summary of what they told me.
  • ...21 more annotations...
  • In the absence of elections, Communist Party officials in China rise up the ranks based on how well they deliver on the party’s priorities, at least in theory. For years, the top priority was economic growth
  • Local officials plowed money into the highways, ports and power plants that manufacturers needed, turning China into the world’s factory.
  • Under Mr. Xi, government priorities have shifted toward self-sufficiency and the use of industrial robots, something that Chinese leaders believe is critical to escaping the middle-income trap, in which a country can no longer compete in low-wage manufacturing because of rising wages but has not yet made the leap to the value-added products of high-income countries.
  • some Chinese companies purchased robots that don’t work well and exaggerated their success to get government subsidies and curry favor with politicians. Directives from party officials with little expertise in robotics fetishize machines beyond their actual usefulness.
  • Those unskilled laborers — who will increasingly be replaced by robots, according to China’s grand strategy — present an economic challenge and a threat to political stability
  • What many Chinese businesses wanted most, she said, was “invisible infrastructure”: a predictable judicial system, fair access to bank credit and land, and regulations that are applied without regard to political connections
  • Her findings, reported in detail in “The Gilded Cage: Techno-State Capitalism in China,” which will be published next fall, suggest that Beijing’s pronouncements about amazing technological advancement should be viewed with a touch of skepticism.
  • Mr. Xi had a privileged childhood as the son of a top Communist Party official. But the Cultural Revolution shattered that sheltered life; he was sent to a remote village for seven years, where he did hard labor and slept in a hillside cave home. As a result, he can claim a familiarity with rural people and rural problems that few world leaders can even imagine.
  • One of Mr. Xi’s most celebrated campaigns has been a vow to stamp out extreme poverty, a tacit acknowledgment that China’s economic miracle has left hundreds of millions of rural farmers behind
  • Some corporate managers complained that government subsidies often flowed to politically connected firms and were wasted, while others grumbled that government directives were unpredictable and ill informed.
  • Only 30 percent of working Chinese adults have high school diplomas, although 80 percent of young people are getting them now, according to Scott Rozelle, a co-author of “Invisible China: How the Urban-Rural Divide Threatens China’s Rise.”
  • more than 600 million Chinese people scrape by on the equivalent of $140 per month.
  • Last year, Mr. Xi declared “complete victory” in eradicating extreme poverty in China, but skepticism about his success abounds. Some experts on China report that local officials gave out cash to rural families — one-time payments that got them temporarily over the poverty line — instead of initiating badly needed structural reforms.
  • “Rural Chinese in many ways are like the lowest class in a policy-driven caste system,” Mr. Rozelle told me. Nevertheless, even a flawed program to address rural poverty is better than no program at all.
  • He set out to save his rudderless Communist Party by cracking down on graft and bringing wayward nouveaux riches back into the fold by recruiting them as party members. He ordered chief executives to contribute more toward “common prosperity” and showed what could happen to those who didn’t toe the party line.
  • Mr. Xi’s crackdown went too far. Increasingly, foreign investors and Chinese entrepreneurs are fleeing. Coupled with a draconian zero-Covid strategy, Mr. Xi’s policies have sent the economy into a tailspin.
  • More worrisome still is the return of an atmosphere of fear and sycophancy not seen since Chairman Mao’s time. A businessman who was critical of Mr. Xi was sent to prison for 18 years. The era of relative openness to intellectual debate and foreign ideas appears to have come to an end.
  • Safeguards meant to avoid another despot like Mao have gone out the window so Mr. Xi can have more time in power. Mr. Xi has been called a modern-day emperor, the chairman of everything and the most powerful man in the world.
  • Yuhua Wang, a political scientist at Harvard, is the author of the book “The Rise and Fall of Imperial China,” released this month. Mr. Wang studied 2,000 years of Chinese history and discovered, somewhat counterintuitively, that China’s central government has always been the weakest under its longest-serving rulers.
  • Emperors, he explains, have always stayed in power by weakening the elites who might have overthrown them — the very people who are capable of building a strong and competent government.
  • “One can argue that he has good intentions,” Mr. Wang told me of Mr. Xi. But the tactics he has used to maintain power — crushing critics, micromanaging businesses, whipping up nationalist fervor and walling China off from the world — may end up weakening China in the end.
Javier E

Robots and Robber Barons - NYTimes.com - 0 views

  • profits have surged as a share of national income, while wages and other labor compensation are down. The pie isn’t growing the way it should — but capital is doing fine by grabbing an ever-larger slice, at labor’s expense.
  • Increasingly, profits have been rising at the expense of workers in general, including workers with the skills that were supposed to lead to success in today’s economy.
  • there are two plausible explanations, both of which could be true to some extent. One is that technology has taken a turn that places labor at a disadvantage; the other is that we’re looking at the effects of a sharp increase in monopoly power. Think of these two stories as emphasizing robots on one side, robber barons on the other.
  • similar stories are playing out in many fields, including services like translation and legal research. What’s striking about their examples is that many of the jobs being displaced are high-skill and high-wage; the downside of technology isn’t limited to menial workers.
  • can innovation and progress really hurt large numbers of workers, maybe even workers in general? I often encounter assertions that this can’t happen. But the truth is that it can, and serious economists have been aware of this possibility for almost two centuries. The early-19th-century economist David Ricardo is best known for the theory of comparative advantage, which makes the case for free trade; but the same 1817 book in which he presented that theory also included a chapter on how the new, capital-intensive technologies of the Industrial Revolution could actually make workers worse off, at least for a while — which modern scholarship suggests may indeed have happened for several decades.
  • increasing business concentration could be an important factor in stagnating demand for labor, as corporations use their growing monopoly power to raise prices without passing the gains on to their employees.
  • that shift is happening — and it has major implications. For example, there is a big, lavishly financed push to reduce corporate tax rates; is this really what we want to be doing at a time when profits are surging at workers’ expense? Or what about the push to reduce or eliminate inheritance taxes; if we’re moving back to a world in which financial capital, not skill or education, determines income, do we really want to make it even easier to inherit wealth?
abbykleman

The future of home delivery: Pedestrians and robots will soon share the pavements | The... - 1 views

  • WHO would be a delivery driver? As if a brutal schedule, grumpy motorists, lurking traffic wardens and the risk of an aching back were not bad enough, they now face the fear of robots taking their jobs.
jongardner04

Ted Cruz Keeps Up Pressure on Donald Trump; Bernie Sanders Takes 2 on 'Super Saturday' ... - 0 views

  • Senator Ted Cruz scored decisive wins in the Kansas and Maine caucuses on Saturday, demonstrating his enduring appeal among conservatives as he tried to reel in Donald J. Trump’s significant lead in the Republican presidential race.
  • In Democratic contests, Hillary Clinton scored a commanding victory in Louisiana, the state with the most delegates in play on Saturday, while Senator Bernie Sanders won the Nebraska and Kansas caucuses, according to The Associated Press. The results did not alter the contours of a race in which Mrs. Clinton maintains a significant delegate lead.
  • The biggest stakes were on the Republican side, and the voters sensed it; turnout in Kansas, for example, was more than double that of 2012. Mr. Cruz won 48 percent of the vote there, while Mr. Trump received 23 percent, Senator Marco Rubio of Florida won 17 percent and Gov. John Kasich of Ohio won 11 percent. The results were tighter in Maine, but Mr. Cruz still easily defeated Mr. Trump there by 13 percentage points. With Mr. Trump’s victories coming by smaller margins, Mr. Cruz had the biggest delegate haul of the day, appearing to net at least 15 more than the front-runner.
  • “I think what it represents is Republicans coalescing, saying it would be a disaster for Donald Trump to be our nominee and we’re going to stand behind the strongest conservative in the race,” Mr. Cruz told reporters in Coeur d’Alene, Idaho, one of four states with Republican contests on Tuesday.
  • Mr. Trump’s losses underlined his continued vulnerability in states that hold time-intensive caucuses: He has lost five of seven such contests. He has performed far better in states holding primaries, which require less organization, and some of which also allow Democrats and independents to vote in Republican races.
  • The results suggested that a substantial number of Republicans were still uneasy about Mr. Trump: He finished above 40 percent in just one state. It was an indication that the growing campaign to deny Mr. Trump the nomination may not be a pointless exercise. The Stop Trump campaign was joined last week by Mitt Romney, who delivered a blistering attack on the Republican front-runner, portraying him as a threat to the party and the nation. And Mr. Trump reinforced questions about his candidacy at a debate on Thursday by making a barely veiled reference to his penis.
  • Whether he has incurred significant damage will be better known on Tuesday, if Mr. Kasich and Mr. Cruz can compete in Michigan and Mr. Cruz can threaten him in Mississippi.
  • Mr. Trump’s comments about building a wall along the border with Mexico and about illegal immigrants causing crime have drawn demonstrations almost everywhere he goes, and that was true in Wichita, too. Trump supporters in the caucus line engaged in shouting with several dozen protesters, many of them Hispanics, who make up 20 percent of the city’s population. Trucks with Mexican flags hanging out the windows and Latin music blaring from the speakers cruised slowly past the line.