Group items tagged: Apple

Javier E

Europe's energy crisis may get a lot worse

  • It was only at the end of April that Russia cut gas supplies to Poland and Bulgaria, the first two victims of its energy-pressure campaign. But overall gas shipments are at less than one-third the level they were just a year ago. In mid-June, shipments through Nord Stream 1 were cut by 75 percent; in July, they were cut again.
  • “It is wartime,” Tatiana Mitrova, a research fellow at Columbia, told her colleague Jason Bordoff, a former adviser to Barack Obama, on an eye-opening recent episode of the podcast “Columbia Energy Exchange.”
  • I think there’s been a gradual and growing recognition that we are headed into the worst global energy crisis at least since the 1970s and perhaps longer than that.
  • “This is something that European politicians and consumers didn’t want to admit for quite a long time. It sounds terrible, but that’s the reality. In wartime the economy is mobilized. The decisions are made by the governments, not by the free market. This is the case for Europe this winter,” she said, adding that we may see forced rationing, price controls, the suspension of energy markets and shutdowns of whole industrial sectors. “We are not actually talking about extremely high prices, but we are talking about physical absence of energy resources in certain parts of Europe.”
  • It’s increasingly clear that Vladimir Putin is using gas as a weapon and trying to supply just enough gas to Europe to keep Europe in a perpetual state of panic about its ability to weather the coming winter.
  • Europe has been finding all the supplies that it can, but governments are realizing that’s not going to be sufficient. There are going to have to be efforts taken to curb demand as well and to prepare for the possibility of really severe energy rationing this winter.
  • If things become really severe this winter, I fear that you could see European countries start to look out for themselves rather than one another.
  • I think we could start to see governments saying, “Well, we’re going to restrict exports. We’re going to keep our energy at home.” Everyone starts to just look out for themselves, which I think would be exactly what Putin would hope for.
  • it would be wise to assume that Russia will use every opportunity it can to turn the screws on Europe.
  • I think you would see Russia continue to restrict gas exports and maybe cut them off completely to Europe — and a very cold winter. I think a combination of those two things would mean sky-high energy prices.
  • governments will have to ration energy supplies and decide what’s important.
  • Since Russia invaded Ukraine and maybe until very recently, I’ve had the sense that the European public and the public beyond Europe, as well as policymakers, have been a little bit sleepwalking into a looming crisis.
  • There was some unrealistic optimism about how quickly Europe could do without Russian gas. And we took too long to confront seriously just how bad the numbers would look if the worst came to pass.
  • I think there was continued skepticism that Putin would really cut the gas supply. “It might be declining. It might be a little bit lower,” people thought. “But he’s not really going to shut off the supply.” And I think now everyone’s recognizing that’s a real possibility.
  • Putin has the ability to do a lot of damage to the global economy — and himself, to be sure — if he cuts oil exports as well.
  • There’s no extra oil supply in the world at all, as OPEC Plus reminded everyone by saying: No, we’re not going to be increasing production much, and we can’t even if we wanted to.
  • For all the talk about high gasoline prices and the rhetoric of Putin’s energy price hike, Russia’s oil exports have not fallen very much. If that were to happen — either because the U.S. and Europe forced oil to come off the market to put economic pressure on Putin or because he takes the oil off the market to hurt all of us — oil prices go up enormously.
  • it depends how much he takes off the market. We don’t know exactly. If Russia were to cut its oil exports completely, the prices would just skyrocket — to hundreds of dollars a barrel, I think.
  • That’s because there’s just no extra supply out there today at all. There’s very little extra supply that the Saudis and the Emiratis can put on the market. And that’s about it. We’ve used the strategic petroleum reserve, and that’s coming to an end in the next several months.
  • We’re heading into a winter where markets might simply not be able to work anymore as the instrument by which you determine supply and demand.
  • if prices just soar to uncontrollable levels, markets are not going to work anymore. You’re going to need governments to step in and decide who gets the scarce energy supplies — how much goes to heating homes, how much goes to industry. There’s going to be a pecking order of different industries, where some industries are deemed more important to the economy than others.
  • a lot of governments in Europe are putting in place those kinds of emergency plans right now.
  • if the worst comes to pass, governments will, by necessity, step in to say: Homes get the natural gas, and parts of industry get dumped. Probably they would set price caps on energy or massively subsidize it. So it’s going to be very painful.
  • Worryingly for the European economy, this may mean that factories that can’t switch fuels will go dormant.
  • Today, before winter comes, gas prices in Europe are around $60 per million British thermal units. That compares to around $7 to $8 here in the United States
  • if the worst comes to pass, the market, as a mechanism, simply won’t work. The market will break. The prices will go too high. There’s just not enough energy for the market to balance at a certain price.
  • don’t forget, the amount of liquefied natural gas that Europe is importing today — Asia is competing for those shipments. What happens if the Asian winter is very bad? What happens if China and others are willing to pay very high prices for it?
  • I think we’re in a multiyear potential energy crisis.
  • one thing that hasn’t gotten enough attention and that I worry most about is the impact this is having on emerging markets and the developing economies, because it is an interconnected market. When Europe is competing to buy L.N.G. at very high prices, not to mention Asia, that means if you’re in Pakistan or Bangladesh or lower-income countries, you’re really struggling to afford it. You’re just priced out of the market for natural gas — and coal. Coal is incredibly expensive now.
  • I think that that is a real potential humanitarian crisis, as a ripple effect of what’s happening in Europe right now.
  • right now, the price of gas in Europe is about four times what it was last year. Russia has cut flows to Europe by two-thirds but is earning the same revenue as it did last year. So Putin is not being hurt by the loss of gas exports to Europe. Europe’s being hurt by that.
  • this situation could last for several years.
  • Could the energy crisis bring about a change of heart, in which European countries withdraw some of their support or even begin to pressure Ukraine to negotiate a settlement? Is it possible that could even happen in advance of this winter?
  • you would imagine that, over time, when you don’t see Ukraine on the front page each and every day, eventually people’s attention wanes a bit and at a certain point the economic pain of high energy prices or other economic harms from the conflict reach a point where support may start to fracture a bit.
  • Whether that reaches a point where you start to see the West put pressure on Ukraine to capitulate, I think we’re pretty far away from that now, because everyone recognizes how outrageous and unacceptable Putin’s conduct is.

Joe Biden Just Crushed China's Semiconductor Industry

  • Making computer chips requires a lot of advanced equipment. Much of that advanced equipment is made by American companies. The new rules from the Biden administration make it so that any company, anywhere in the world, using certain advanced American equipment to make chips can’t sell those chips to Chinese-controlled companies.
  • at the stroke of a pen, China is getting cut off from the kind of advanced chips it can’t manufacture on its own. Which will cripple both military progress and tech-sector progress, too.
  • in case there was any question, it is clear that China is being viewed as an adversary, and that that view is a bipartisan one. Any tech company with business in China would do well to note that any further investments are fraught with risk, and previous investments need to be diversified sooner rather than later.
  • while Trump deserves credit for upsetting the apple cart in terms of conventional wisdom with regards to China relations, the Biden administration is correct to pursue those previous actions to their logical conclusion. . . .
  • it bans chips but it also bans equipment as well (and, given the restrictions it places on U.S.-persons, also bans the service of existing equipment).
  • That certainly increases the motivation for China to build alternatives, but it is tough to get strong legs when you have to first figure out how to make weights (but the weights involve the most complex tools ever invented by humans).
  • now the Chinese have to reinvent every wheel in the process just to get to par as it exists in the West circa 2022.
  • we talked about an accusation we sometimes hear: “It’s nice that you right-wingers have come around since 2016, but the Republican party was always like this.”
  • I argued that I don’t think this criticism is really right. Let’s pretend that you were a Republican in 2000 and you cared about: a robust foreign policy, the spread of democracy abroad, the rule of law, and free trade.
  • Well, guess what: The Democratic party is now your natural home for those priorities. Sure, the Democrats also have some stuff you’re against, like political correctness and student loan forgiveness and expansion of the welfare state.
  • I hope you’ll watch this video clip. Because it’s not what Tuberville is saying so much as the crowd’s reaction to it. The guy is basically doing a Supreme Grand Wizard routine—all that’s missing is the n-word—and the crowd forking loves it.
  • at the same time, I understand—I think—what these critics mean. What they mean is:
  • Republican voters were always revanchists motivated not by high-minded intellectual arguments, but by simple animosities. Like racism.
  • And when you put it this way, I think the criticism is valid. For example:
  • The point is that the Republican party has changed along some very important, policy and ideological vectors. It really wasn’t always like this.
  • the actual Republican voters at this rally? They got crazy for it. They are into it.
  • Is there any way to read this except as an expression of cut-and-dried, out-and-proud, no dog whistle racism?
  • we can stipulate that the majority of Republican voters aren’t motivated in large part by racial animosity. I want to be as generous as possible so that Republicans reading this don’t think that they, personally, are being accused.
  • However small the minority of out-and-out racists in the Republican voting ranks might be, it’s much larger than people like me thought it was 20 years ago.
  • And any Republican/conservative who can’t come to grips with that today—who is still pretending that their coalition is motivated by either high-minded political theory or benign tribalism—has to be trying (hard) not to see the truth.

The Monk Who Thinks the World Is Ending - The Atlantic

  • Seventy thousand years ago, a cognitive revolution allowed Homo sapiens to communicate in story—to construct narratives, to make art, to conceive of god.
  • Twenty-five hundred years ago, the Buddha lived, and some humans began to touch enlightenment, he says—to move beyond narrative, to break free from ignorance.
  • Three hundred years ago, the scientific and industrial revolutions ushered in the beginning of the “utter decimation of life on this planet.”
  • Humanity has “exponentially destroyed life on the same curve as we have exponentially increased intelligence,” he tells his congregants.
  • Now the “crazy suicide wizards” of Silicon Valley have ushered in another revolution. They have created artificial intelligence.
  • Forall provides spiritual advice to AI thinkers, and hosts talks and “awakening” retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years
  • Humans are already destroying life on this planet. AI might soon destroy us.
  • His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth.” The residents there meditate on their breath and on metta, or loving-kindness, an emanation of joy to all creatures.
  • They meditate in order to achieve inner clarity. And they meditate on AI and existential risk in general—life’s violent, early, and unnecessary end.
  • There is “no reason” to think AI will preserve humanity, “as if we’re really special,” Forall tells the residents, clad in dark, loose clothing, seated on zafu cushions on the wood floor. “There’s no reason to think we wouldn’t be treated like cattle in factory farms.”
  • His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.
  • In the past few years, MAPLE has become something of the house monastery for people worried about AI and existential risk.
  • Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us
  • we should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.”
  • Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path.
  • Forall and many MAPLE residents are what are often called, derisively if not inaccurately, “doomers.”
  • The seminal text in this ideological lineage is Nick Bostrom’s Superintelligence, which posits that AI could turn humans into gorillas, in a way. Our existence could depend not on our own choices but on the choices of a more intelligent other.
  • he is spending his life ruminating on AI’s risks, which he sees as far from banal. “We are watching humanist values, and therefore the political systems based on them, such as democracy, as well as the economic systems—they’re just falling apart,” he said. “The ultimate authority is moving from the human to the algorithm.”
  • Forall’s mother worked for humanitarian nonprofits and his father for conservation nonprofits; the household, which attended Quaker meetings, listened to a lot of NPR.
  • He got his answer: Craving is the root of all suffering. And he became ordained, giving up the name Teal Scott and becoming Soryu Forall: “Soryu” meaning something like “a growing spiritual practice” and “Forall” meaning, of course, “for all.”
  • In 2013, he opened MAPLE, a “modern” monastery addressing the plagues of environmental destruction, lethal weapons systems, and AI, offering co-working and online courses as well as traditional monastic training.
  • His vision is dire and grand, but perhaps that is why it has found such a receptive audience among the folks building AI, many of whom conceive of their work in similarly epochal terms.
  • The nonprofit’s revenues have quadrupled, thanks in part to contributions from tech executives as well as organizations such as the Future of Life Institute, co-founded by Jaan Tallinn, a co-creator of Skype.
  • The donations have helped MAPLE open offshoots—Oak in the Bay Area, Willow in Canada—and plan more. (The highest-paid person at MAPLE is the property manager, who earns roughly $40,000 a year.)
  • The strictness of the place helps them let go of ego and see the world more clearly, residents told me. “To preserve all life: You can’t do that until you come to love all life, and that has to be trained.”
  • Forall was absolute: Nine countries are armed with nuclear weapons. Even if we stop the catastrophe of climate change, we will have done so too late for thousands of species and billions of beings. Our democracy is fraying. Our trust in one another is fraying
  • Many of the very people creating AI believe it could be an existential threat: One 2022 survey asked AI researchers to estimate the probability that AI would cause “severe disempowerment” or human extinction; the median response was 10 percent. The destruction, Forall said, is already here.
  • “It’s important to know that we don’t know what’s going to happen,” he told me. “It’s also important to look at the evidence.” He said it was clear we were on an “accelerating curve,” in terms of an explosion of intelligence and a cataclysm of death. “I don’t think that these systems will care too much about benefiting people. I just can’t see why they would, in the same way that we don’t care about benefiting most animals. While it is a story in the future, I feel like the burden of proof isn’t on me.”

Xi Jinping's Favorite Television Shows - The Bulwark

  • After several decades of getting it “right,” why does China now seem to insist on getting it “wrong”?
  • a single-party system meets with widespread, almost universal, scorn in the United States and elsewhere. And so, from the Western point of view, because it lacks legitimacy it must be kept in power via nationalist cheerleading, government media control, and a massive repressive apparatus.
  • What if a segment of the population actually supported, or at least tolerated, the CCP? And even if that segment involved both myth and fact, it behooves the CCP to keep the myth alive.
  • How does the CCP garner popular support in an information era? How does a dictatorship explain to its population that its unchallenged rule is wise, just, and socially beneficial?
  • All of this takes place against a backdrop of family and social developments in which we can explore household dynamics, dating habits, and professional aspirations—all within social norms for those honest party members and seemingly violated by those who are not so honest.
  • watch the television series Renmin de Mingyi (“In the Name of the People”), publicly available with English subtitles.
  • In the Name of the People is a primetime drama about a local prosecutor’s efforts to root out corruption in a modern-day, though fictional, Chinese city. Beyond the anti-corruption narrative, the series also goes into local CCP politics as some of the leaders are (you guessed it) corrupt and others are simply bureaucratic time-servers, guarding their own privileges and status without actually helping the people they purport to serve.
  • the series boasts one of Xi’s other main themes, “common prosperity,” a somewhat elastic term that usually means the benefits of prosperity should be shared throughout all segments of society.
  • The historical tools used to generate support such as mass rallies and large-scale hectoring no longer work with a more educated and communications-oriented citizenry.
  • the central themes are quite clear: The party has brought historical prosperity to the community and there are a few bad apples who are unfairly trying to benefit from this wealth. There are also various sluggards and mediocrities who have no capacity for improvement or sense of public responsibilities.
  • So we see government officials pondering if they can ever find a date (being the workaholics that they are), or discussing housework with their spouses, or sharing kitchen duties, or reviewing school work with their child.
  • The show makes clear that the vast majority of party members and government officials are dedicated souls who work to improve peoples’ lives. And in the end, virtue triumphs, the party triumphs, China triumphs, and most (not all) of the personal issues are resolved as well.
  • The show’s version of the CCP eagerly and uncynically supports Chinese culture: The same union leader from the wildcat strike also writes and publishes poetry. Calligraphy is as prized as specialty teas. And all of this is told in a lively style, similar to the Hollywood fare Americans might watch.
  • In the Name of the People was first broadcast in 2017 as a lead-up to the last Communist Party Congress, China’s most important decision-making gathering, held every five years. The show was a huge hit at launch, achieving the highest broadcast ratings of any show in a decade.
  • Within a month, the first episode had been seen over 350 million times and just one of the streaming platforms, iQIYI, reported a total of 5.9 billion views for the show’s 55 episodes.
  • All of this must come as good news for the prosecutors featured so favorably in the series—for their real-life parent government body, the Supreme People’s Procuratorate, commissioned and provided financing for the show.
  • At a minimum, these shows illustrate a stronger self-awareness in the CCP and considerable improvement in communication strategy.
  • Most important, it provides direction to current party members. Indeed, in some cities viewing was made obligatory and the basis for “study sessions” for party cadres.
  • Second, the series’ enormous public success and its acknowledgment of the party’s deficiencies allow the party to control the criticism without ever addressing the fundamental question of whether a one-party system is intrinsically susceptible to corruption or poor performance.
  • As communication specialists like to say, There is already a conversation taking place about your brand—the only question is whether you will lead the conversation. The CCP is leading in its communications strategy and making it as easy as possible for Chinese citizens to support Xi.
  • it is not difficult to see that in this area, as in many others, China is breaking with tactics from the past and is playing its cards increasingly well. Whether the CCP can renew itself, reestablish that social contract, and live up to its television image is another question.

Amazon Prime Day Is Dystopian - The Atlantic

  • When Prime was introduced, in 2005, Amazon was relatively small, and still known mostly for books. As the company’s former director of ordering, Vijay Ravindran, told Recode’s Jason Del Rey in 2019, Prime “was brilliant. It made Amazon the default.”
  • It created incentives for users to be loyal to Amazon, so they could recoup the cost of membership, then $79 for unlimited two-day shipping. It also enabled Amazon to better track the products they buy and, when video streaming was added as a perk in 2011, the shows they watch, in order to make more things that the data indicated people would want to buy and watch, and to surface the things they were most likely to buy and watch at the very top of the page.
  • And most important, Prime habituated consumers to a degree of convenience, speed, and selection that, while unheard-of just years before, was made standard virtually overnight.
  • “It is genius for the current consumer culture,” Christine Whelan, a clinical professor of consumer science at the University of Wisconsin at Madison, told me. “It encourages and then meets the need for the thing, so we then continue on the hedonic treadmill: Buy the latest thing we want and then have it delivered immediately and then buy the next latest thing.”
  • With traditional retail, “there’s the friction of having to go to the store, there’s the friction of will the store have it, there’s the friction of carrying it,” Whelan said. “There’s the friction of having to admit to another human being that you’re buying it. And when you remove the friction, you also remove a lot of individual self-control. The more you are in the ecosystem and the easier it is to make a purchase, the easier it is to say yes to your desire rather than no.”
  • “It used to be that being a consumer was all about choice,”
  • But now, “two-thirds of people start their product searches on Amazon.”
  • Prime discourages comparison shopping—looking around is pointless when everything you need is right here—even as Amazon’s sheer breadth of products makes shoppers feel as if they have agency.
  • “Consumerism has become a key way that people have misidentified freedom,”
  • what Amazon represents is a corporate infrastructure that is increasingly directed at getting as many consumers as possible locked into a consumerist process—an Amazon consumer for life.”
  • Amazon offers steep discounts to college students and new parents, two groups that are highly likely to change their buying behavior. It keeps adding more discounts and goodies to the Prime bundle, making subscribing ever more appealing. And, in an especially sinister move, it makes quitting Prime maddeningly difficult.
  • As of 2020, the United States has more Prime memberships than households.
  • In 2019, Amazon shaved a full day off its delivery time, making one-day shipping the default, and also making Prime an even more tantalizing proposition: Why hop in the car for anything at all when you could get it delivered tomorrow, for free?
  • As subscription numbers grew through the 2010s, the revenue from them helped Amazon pump more money into building fulfillment centers (to get products to people even faster), acquiring new businesses (to control even more of the global economy), and adding more perks to the bundle (to encourage more people to sign up)
  • “Every decision we make is based upon the fact that Amazon can get these books cheaper and faster. The prevailing expectation is you can get anything online shipped for”— he scrunched his fingers into air quotes—“‘free,’ in one or two days. And there’s really only one company that can do that. They do that because they’re willing to push and exploit their workers.”
  • Thanks in large part to the revenue from Prime subscriptions and from the things subscribers buy, Amazon’s value has multiplied roughly 97 times, to $1.76 trillion, since the service was introduced. Amazon is the second-largest private employer in the United States, after Walmart, and it is responsible for roughly 40 percent of all e-commerce in the United States.
  • It controls hundreds of millions of square feet across the country and is opening more fulfillment centers all the time. It has acquired dozens of other companies, most recently the film studio MGM for $8.5 billion. Its cloud-computing operation, Amazon Web Services, is the largest of its kind and provides the plumbing for a vast swath of the internet, to a profit of $13.5 billion last year.
  • Amazon has entered some 40 million American homes in the form of the Alexa smart speaker, and some 150 million American pockets in the form of the Amazon app
  • “Amazon is a beast we’ve never seen before,” Alimahomed-Wilson told me. “Amazon powers our Zoom calls. It contracts with ICE. It’s in our neighborhoods. This is a very different thing than just being a large retailer, like Walmart or the Ford Motor Company.”
  • I find it useful to compare Big Tech to climate change, another force that is altering the destiny of everyone on Earth, forever. Both present themselves to us all the time in small ways—a creepy ad here, an uncommonly warm November there—but are so big, so abstract, so everywhere that they’re impossible for any one person to really understand
  • Both are the result of a decades-long, very human addiction to consumption and convenience that has been made grotesque and extreme by the incentives and mechanisms of the internet, market consolidation, and economic stratification
  • Both have primarily been advanced by a small handful of very big companies that are invested in making their machinations unseeable to the naked eye.
  • Speed and convenience aren’t actually free; they never are. Free shipping isn’t free either. It just obscures the real price.
  • Next-day shipping comes with tremendous costs: for labor and logistics and transportation and storage; for the people who pack your stuff into those smiling boxes and for the people who deliver them; for the planes and trucks and vans that carry them; for the warehouses that store them; for the software ensuring that everything really does get to your door on time, for air-conditioning and gas and cardboard and steel. Amazon—Prime in particular—has done a superlative job of making all those costs, all those moving parts, all those externalities invisible to the consumer.
  • The pandemic drove up demand for Amazon, and for labor: Last year, company profits shot up 70 percent, Bezos’s personal wealth grew by $70 billion, and 1,400 people a day joined the company’s workforce.
  • Amazon is so big that every sector of our economy has bent to respond to the new way of consuming that it invented. Prime isn’t just bad for Amazon’s workers—it’s bad for Target’s, and Walmart’s. It’s bad for the people behind the counter at your neighborhood hardware store and bookstore, if your neighborhood still has a hardware store and a bookstore. Amazon has accustomed shoppers to a pace and manner of buying that depends on a miracle of precision logistics even when it’s managed by one of the biggest companies on Earth. For the smaller guys, it’s downright impossible.
  • Amazon’s revenue from subscriptions alone—mostly Prime—was $25.2 billion, which is a 31 percent increase from the previous year
  • Just as abstaining from flying for moral reasons won’t stop sea-level rise, one person canceling Prime won’t do much of anything to a multinational corporation’s bottom line. “It’s statistically insignificant to Amazon. They’ll never feel it,” Caine told me. But, he said, “the small businesses in your neighborhood will absolutely feel the addition of a new customer. Individual choices do make a big difference to them.”
  • Whelan teaches a class at UW called Consuming Happiness, and she is fond of giving her students the adage that you can buy happiness—“if you spend your money in keeping with your values: spending prosocially, on experiences. Tons of research shows us this.”

Opinion | The Secret of America's Economic Success - The New York Times

  • there was widespread concern that the pandemic would leave lasting economic scars. After all, the 2008 financial crisis was followed by a weak recovery that left real gross domestic product in many countries far below the pre-crisis trend even a decade later. Indeed, as we approach Covid’s four-year mark, many of the world’s economies remain well short of full recovery.
  • But not the United States. Not only have we had the strongest recovery in the advanced world, but the International Monetary Fund’s latest World Economic Outlook also points out that American growth since 2019 has actually exceeded pre-Covid projections.
  • let’s take a moment to celebrate this good economic news — and try to figure out what went right with the U.S. economy.
  • Part of the answer, to be fair, is luck. Russia’s invasion of Ukraine caused a major energy shock in Europe, which had come to rely on imports of Russian natural gas. America, which exports gas, was much less affected.
  • What about inflation? When you use comparable measures, America also has the lowest inflation rate among major economies.
  • It’s true that one recent poll found that a majority of Americans and 60 percent of Republicans say that unemployment is near a 50-year high. But it’s actually near its lowest level since the 1960s.
  • A second, probably more important factor was that the United States pursued aggressively expansionary fiscal policy
  • Many economists were extremely critical, warning that this spending would fuel inflation, which it probably did for a while. But inflation has subsided, while “Big Fiscal” helped the economy get to full employment — arguably the first time we’ve had truly full employment in decades.
  • A strong job market may in turn have had major long-term benefits, by drawing previously marginalized Americans into the work force.
  • the percentage of U.S. adults in their prime working years participating in the labor force is now at its highest level in 20 years. One number I find especially striking is labor force participation by Americans with a disability, which has soared.
  • One last thing: When Covid struck, all advanced countries took strong measures to limit economic hardship, but they took different approaches. European governments generally paid employers to keep workers on their payrolls, even if they were temporarily idle. America, for the most part, let layoffs happen but protected workers with expanded unemployment benefits.
  • There was a case for each approach. Europe’s approach helped keep workers connected to their old jobs; the U.S. approach created more flexibility, making it easier for workers to move to different jobs if the post-Covid economy turned out to look quite different from the economy before the pandemic.
  • is clear: We have been remarkably successful, even if nobody will believe it.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 0 views

  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • ...46 more annotations...
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • I met with Kahneman
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
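  • The batting-average claim is easy to check with a few lines of simulation (a sketch of the underlying statistics, not Nisbett’s actual survey; the .300 “true skill” figure is an assumption): give every simulated batter identical skill, and .450 starts are common over 20 at bats but essentially impossible over a full season.

```python
import random

random.seed(0)
TRUE_AVG = 0.300  # assumed identical underlying skill for every batter

def batting_average(at_bats: int) -> float:
    """Simulate at_bats independent trials and return the hit rate."""
    hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
    return hits / at_bats

trials = 10_000
early = sum(batting_average(20) >= 0.450 for _ in range(trials)) / trials
season = sum(batting_average(500) >= 0.450 for _ in range(trials)) / trials
print(f"share batting .450+ after  20 at bats: {early:.3f}")
print(f"share batting .450+ after 500 at bats: {season:.3f}")
```

With identical skill, roughly one simulated batter in ten looks like a .450 hitter over 20 at bats, and essentially none does over 500 — the law of large numbers at work.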
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • Most promising are a handful of video games. Their genesis was in the Iraq War
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,”
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
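  • The learn-by-prediction idea can be caricatured with the simplest possible “language model” (my toy illustration, not Hinton’s architecture or anything OpenAI uses): count which character follows which in a training text, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy next-character predictor: "training" is just counting what
# followed each character in the text.
text = "the cat sat on the mat "
counts = defaultdict(Counter)
for cur, nxt in zip(text, text[1:]):
    counts[cur][nxt] += 1

def predict(ch: str) -> str:
    # Predict the follower seen most often during training
    return counts[ch].most_common(1)[0][0]

print(predict("h"))  # 'e' — every "h" in the text was followed by "e"
print(predict("a"))  # 't' — "cat", "sat", "mat"
```

A real model replaces these raw counts with billions of adjustable weights and predicts from long contexts rather than single characters, but the loop is the same: predict, compare with what actually came next, adjust.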
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world.
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand.
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality.
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest.
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute.
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years.
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world.
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish.
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness.
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down.
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain.
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick.
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes.
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly.
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI.
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary.
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance.
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast.
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his.
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Sam Altman's ouster at OpenAI exposes growing rift in AI industry - The Washington Post - 0 views

  • Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
  • “My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.
  • Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity.
  • Helen Toner, the director of strategy and foundational research grants for Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in the work.
  • Sutskever helped create AI software at the University of Toronto, called AlexNet, which classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for the field of computer vision and deep learning.
  • He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
  • At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”
  • OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they invested into OpenAI, but capped how much they could get back, with the rest flowing back into the company’s nonprofit. The company’s structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive.
  • As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles have turned to anger at OpenAI’s board.
  • “What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”
Javier E

BOOM: Google Loses Antitrust Case - BIG by Matt Stoller - 0 views

  • It’s a long and winding road for Epic. The firm lost the Apple case, which is on appeal, but got the Google case to a jury, along with several other plaintiffs. Nearly every other firm challenging Google gradually dropped out of the case, getting special deals from the search giant in return for abandoning their claims. But Sweeney was righteous, and believed that Google helped ruin the internet. He didn’t ask for money or a special deal, instead seeking to have Judge James Donato force Google to make good on its “broken promise,” which he characterized as “an open, competitive Android ecosystem for all users and industry participants.”
  • Specifically, Sweeney asked for the right for firms to have their own app stores, and the ability to use their own billing systems. Basically, he wants to crush Google’s control over the Android phone system. And I suspect he just did. You can read the verdict here.
  • Google is likely to be in trouble now, because it is facing multiple antitrust cases, and these kinds of decisions have a bandwagon effect. The precedent is set, in every case going forward the firm will now be seen as presumed guilty, since a jury found Google has violated antitrust laws. Judges are cautious, and are generally afraid of being the first to make a precedent-setting decision. Now they won’t have to. In fact, judges and juries will now have to find a reason to rule for Google. If, say, Judge Amit Mehta in D.C., facing a very similar fact-pattern, chooses to let Google off the hook, well, he’ll look pretty bad.
  • There are a few important take-aways. First, this one didn’t come from the government, it was a private case by a video game maker that sued Google over its terms for getting access to the Google Play app store for Android, decided not by a fancy judge with an Ivy League degree but by a jury of ordinary people in San Francisco. In other words, private litigation, the ‘ambulance-chasing’ lawyers, are vital parts of our justice system.
  • Second, juries matter, even if they are riskier for everyone involved. It’s kind of like a mini poll, and the culture is ahead of the cautious legal profession. This quick decision is a sharp contrast with the 6-month delay to an opinion in the search case that Judge Mehta sought in the D.C. trial.
  • Third, tying claims, which is a specific antitrust violation, are good law. Tying means forcing someone to buy an unrelated product in order to access the actual product they want to buy. The specific legal claim here was about how Google forced firms relying on its Google Play app store to also use its Google Play billing service, which charges an inflated price of 30% of the price of an app. Tying is pervasive throughout the economy, so you can expect more suits along these lines.
  • And finally, big tech is not above the law. This loss isn’t just the first antitrust failure for Google, it’s the first antitrust loss for any big tech firm. I hear a lot from skeptics that the fix is in, that the powerful will always win, that justice in our system is a mirage. But that just isn’t true. A jury of our peers just made that clear.
Javier E

Pro-China YouTube Network Used A.I. to Malign U.S., Report Finds - The New York Times - 0 views

  • The 10-minute post was one of more than 4,500 videos in an unusually large network of YouTube channels spreading pro-China and anti-U.S. narratives, according to a report this week from the Australian Strategic Policy Institute
  • Some of the videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to the institute to pair A.I. voices with video essays.
  • The campaign’s goal, according to the report, was clear: to influence global opinion in favor of China and against the United States.
  • ...17 more annotations...
  • The videos promoted narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players. Some of the clips fawned over Chinese companies like Huawei and denigrated American companies like Apple.
  • Content from at least 30 channels in the network drew nearly 120 million views and 730,000 subscribers since last year, along with occasional ads from Western companies
  • Disinformation — such as the false claim that some Southeast Asian nations had adopted the Chinese yuan as their own currency — was common. The videos were often able to quickly react to current events
  • The coordinated campaign might be “one of the most successful influence operations related to China ever witnessed on social media.”
  • Historically, its influence operations have focused on defending the Communist Party government and its policies on issues like the persecution of Uyghurs or the fate of Taiwan
  • Efforts to push pro-China messaging have proliferated in recent years, but have featured largely low-quality content that attracted limited engagement or failed to sustain meaningful audiences
  • “This campaign actually leverages artificial intelligence, which gives it the ability to create persuasive threat content at scale at a very limited cost compared to previous campaigns we’ve seen,”
  • YouTube said in a statement that its teams work around the clock to protect its community, adding that “we have invested heavily in robust systems to proactively detect coordinated influence operations.” The company said it welcomed research efforts and that it had shut down several of the channels mentioned in the report for violating the platform’s policies.
  • China began targeting the United States more directly amid the mass pro-democracy protests in Hong Kong in 2019 and continuing with the Covid-19 pandemic, echoing longstanding Russian efforts to discredit American leadership and influence at home and abroad.
  • Over the summer, researchers at Microsoft and other companies unearthed evidence of inauthentic accounts that China employed to falsely accuse the United States of using energy weapons to ignite the deadly wildfires in Hawaii in August.
  • Meta announced last month that it removed 4,789 Facebook accounts from China that were impersonating Americans to debate political issues, warning that the campaign appeared to be laying the groundwork for interference in the 2024 presidential elections.
  • It was the fifth network with ties to China that Meta had detected this year, the most of any country.
  • The advent of artificial intelligence seems to have drawn special interest from Beijing. Ms. Keast of the Australian institute said that disinformation peddlers were increasingly using easily accessible video editing and A.I. programs to create large volumes of convincing content.
  • She said that the network of pro-China YouTube channels most likely fed English-language scripts into readily available online text-to-video software or other programs that require no technical expertise and can produce clips within minutes. Such programs often allow users to select A.I.-generated voice narration and customize the gender, accent and tone of voice.
  • In 39 of the videos, Ms. Keast found at least 10 artificially generated avatars advertised by a British A.I. company
  • she also discovered what may be the first example in an influence operation of a digital avatar created by a Chinese company — a woman in a red dress named Yanni.
  • The scale of the pro-China network is probably even larger, according to the report. Similar channels appeared to target Indonesian and French people. Three separate channels posted videos about chip production that used similar thumbnail images and the same title translated into English, French and Spanish.
Javier E

Excuse me, but the industries AI is disrupting are not lucrative - 0 views

  • Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
  • another huge new AI model revealed.
  • that’s. . . not what’s going on. Rather, they pre-recorded it and sent individual frames of the video to Gemini to respond to, as well as more informative prompts than shown, in addition to editing the replies from Gemini to be shorter and thus, presumably, more relevant. Factor all that in, and Gemini doesn’t look that different from GPT-4,
  • ...24 more annotations...
  • Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worry and considerations
  • The money involved is staggering—Anthropic announced they would compete with OpenAI and raised 2 billion dollars to train their next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
  • While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is it does not live up to the hyped Apple-esque expectation.
  • given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge—all while noting even leading models still have around a 3% hallucination rate, which stacks up in a complex task.
  • A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative.
  • What are AIs of the GPT-4 generation best at? It’s things like: writing essays or short fictions, digital art, chatting, and programming assistance
  • While I personally wouldn’t go so far as to describe current LLMs as “a solution in search of a problem” like cryptocurrency has famously been described as, I do think the description rings true in an overall economic/business sense so far
  • The issue is that taking the job of a human illustrator just. . . doesn’t make you much money. Because human illustrators don’t make much money
  • While you can easily use Dall-E to make art for a blog, or a comic book, or a fantasy portrait to play an RPG, the market for those things is vanishingly small, almost nonexistent
  • As of this writing, the compute cost to create an image using a large image model is roughly $.001 and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and many hours or days (accounting for work time, as well as schedules). Even if, for simplicity’s sake, we underestimate the cost to be $100 and the time to be 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative.
  • Like, wow, an AI that can write a Reddit comment! Well, there are millions of Reddit comments, which is precisely why we now have AIs good at writing them. Wow, an AI that can generate music! Well, there are millions of songs, which is precisely why we now have AIs good at creating them.
  • Search is the most obvious large market for AI companies, but Bing has had effectively GPT-4-level AI on offer now for almost a year, and there’s been no huge steal from Google’s market share.
  • What about programming? It’s actually a great expression of the issue, because AI isn’t replacing programming—it’s replacing Stack Overflow, a programming advice website (after all, you can’t just hire GPT-4 to code something for you, you have to hire a programmer who uses GPT-4)
  • Even if OpenAI drove Stack Overflow out of business entirely and cornered the market on “helping with programming” they would gain, what? Stack Overflow is worth about 1.8 billion, according to its last sale in 2022. OpenAI already dwarfs it in valuation by an order of magnitude.
  • The more one thinks about this, one notices a tension in the very pitch itself: don’t worry, AI isn’t going to take all our jobs, just make us better at them, but at the same time, the upside of AI as an industry is the total combined worth of the industries it’s replacing, er, disrupting, and this justifies the massive investments and endless economic optimism.
  • It makes me worried about the worst of all possible worlds: generative AI manages to pollute the internet with cheap synthetic data, manages to make being a human artist / creator harder, manages to provide the basis of agential AIs that still pose some sort of existential risk if they get intelligent enough—all without ushering in some massive GDP boost that takes us into utopia
  • If the AI industry ever goes through an economic bust sometime in the next decade I think it’ll be because there are fewer ways than first thought to squeeze substantial profits out of tasks that are relatively commonplace already
  • We can just look around for equivalencies. The payments for humans working as “mechanical turks” on Amazon are shockingly low. If a human pretending to be an AI (which is essentially what a mechanical turk worker is doing) only makes a buck an hour, how much will an AI make doing the same thing?
  • Is it just a quirk of the current state of technology, or something more general?
  • What’s written on the internet is a huge “high quality” training set (at least in that it is all legible and collectable and easy to parse) so AIs are very good at writing the kind of things you read on the internet
  • But data with a high supply usually means its production is easy or commonplace, which, ceteris paribus, means it’s cheap to sell in turn. The result is a highly-intelligent AI merely adding to an already-massive supply of the stuff it’s trained on.
  • Was there really a great crying need for new ways to cheat on academic essays? Probably not. Will chatting with the History Buff AI app (it was in the background of Sam Altman’s presentation) be significantly different than chatting with posters on /r/history on Reddit? Probably not
  • Call it the supply paradox of AI: the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
  • AI might end up incredibly smart, but mostly at things that aren’t economically valuable.
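The cost and speed comparison quoted in the annotations above is simple arithmetic; a minimal sketch, using the article’s assumed figures ($0.001 and 1 second per AI-generated image versus a deliberately underestimated $100 and 1 hour for a human designer):

```python
# Hypothetical check of the ratios quoted above; the dollar and time
# figures are the article's stated assumptions, not measured values.
ai_cost_usd, ai_time_s = 0.001, 1            # per AI-generated image
human_cost_usd, human_time_s = 100, 60 * 60  # underestimated human cost/time

cost_ratio = round(human_cost_usd / ai_cost_usd)  # how many times cheaper
speed_ratio = round(human_time_s / ai_time_s)     # how many times faster

print(cost_ratio, speed_ratio)  # 100000 3600
```

With those assumptions the claimed “100,000 times cheaper and 3,600 times faster” falls straight out of the division.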
Javier E

The Reason Putin Would Risk War - The Atlantic - 0 views

  • Putin is preparing to invade Ukraine again—or pretending he will invade Ukraine again—for the same reason. He wants to destabilize Ukraine, frighten Ukraine. He wants Ukrainian democracy to fail. He wants the Ukrainian economy to collapse. He wants foreign investors to flee. He wants his neighbors—in Belarus, Kazakhstan, even Poland and Hungary—to doubt whether democracy will ever be viable, in the longer term, in their countries too.
  • Farther abroad, he wants to put so much strain on Western and democratic institutions, especially the European Union and NATO, that they break up.
  • Putin will also fail, but he too can do a lot of damage while trying. And not only in Ukraine.
  • ...19 more annotations...
  • He wants to undermine America, to shrink American influence, to remove the power of the democracy rhetoric that so many people in his part of the world still associate with America. He wants America itself to fail.
  • of all the questions that repeatedly arise about a possible Russian invasion of Ukraine, the one that gets the least satisfactory answers is this one: Why?
  • Why would Russia’s president, Vladimir Putin, attack a neighboring country that has not provoked him? Why would he risk the blood of his own soldiers?
  • To explain why requires some history
  • the most significant influence on Putin’s worldview has nothing to do with either his KGB training or his desire to rebuild the U.S.S.R. Putin and the people around him have been far more profoundly shaped, rather, by their path to power.
  • Putin missed that moment of exhilaration. Instead, he was posted to the KGB office in Dresden, East Germany, where he endured the fall of the Berlin Wall in 1989 as a personal tragedy.
  • Putin, like his role model Yuri Andropov, who was the Soviet ambassador to Hungary during the 1956 revolution there, concluded from that period that spontaneity is dangerous. Protest is dangerous. Talk of democracy and political change is dangerous. To keep them from spreading, Russia’s rulers must maintain careful control over the life of the nation. Markets cannot be genuinely open; elections cannot be unpredictable; dissent must be carefully “managed” through legal pressure, public propaganda, and, if necessary, targeted violence.
  • Eventually Putin wound up as the top billionaire among all the other billionaires—or at least the one who controls the secret police.
  • Try to imagine an American president who controlled not only the executive branch—including the FBI, CIA, and NSA—but also Congress and the judiciary; The New York Times, The Wall Street Journal, The Dallas Morning News, and all of the other newspapers; and all major businesses, including Exxon, Apple, Google, and General Motors.
  • He is strong, of course, because he controls so many levers of Russia’s society and economy
  • And yet at the same time, Putin’s position is extremely precarious. Despite all of that power and all of that money, despite total control over the information space and total domination of the political space, Putin must know, at some level, that he is an illegitimate leader
  • He knows that this system works very well for a few rich people, but very badly for everyone else. He knows, in other words, that one day, prodemocracy activists of the kind he saw in Dresden might come for him too.
  • In his mind, in other words, he wasn’t merely fighting Russian demonstrators; he was fighting the world’s democracies, in league with enemies of the state.
  • All of which is a roundabout way of explaining the extraordinary significance, to Putin, of Ukraine.
  • Of course Ukraine matters as a symbol of the lost Soviet empire. Ukraine was the second-most-populous and second-richest Soviet republic, and the one with the deepest cultural links to Russia.
  • modern, post-Soviet Ukraine also matters because it has tried—struggled, really—to join the world of prosperous Western democracies. Ukraine has staged not one but two prodemocracy, anti-oligarchy, anti-corruption revolutions in the past two decades. The most recent, in 2014, was particularly terrifying for the Kremlin
  • Putin’s subsequent invasion of Crimea punished Ukrainians for trying to escape from the kleptocratic system that he wanted them to live in—and it showed Putin’s own subjects that they too would pay a high cost for democratic revolution.
  • they are all a part of the same story: They are the ideological answer to the trauma that Putin and his generation of KGB officers experienced in 1989. Instead of democracy, they promote autocracy; instead of unity, they try constantly to create division; instead of open societies, they promote xenophobia. Instead of letting people hope for something better, they promote nihilism and cynicism.
  • from the Donbas to France or the Netherlands, where far-right politicians hang around the European Parliament and take Russian money to go on “fact-finding missions” to Crimea. It’s a longer way still to the small American towns where, back in 2016, voters eagerly clicked on pro-Trump Facebook posts written in St. Petersburg
Javier E

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times - 0 views

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • ...44 more annotations...
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers clearer than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”
Javier E

The End of the Silicon Valley Myth - The Atlantic - 0 views

  • These companies, launched with promises to connect the world, to think different, to make information free to all, to democratize technology, have spent much of the past decade making the sorts of moves that large corporations trying to grow ever larger have historically made—embracing profit over safety, market expansion over product integrity, and rent seeking over innovation—but at much greater scale, speed, and impact. Now, ruled by monopolies, marred by toxicity, and overly reliant on precarious labor, Silicon Valley looks like it’s finally run hard up into its limits.
  • They’re failing utterly to create the futures they’ve long advertised, or even to maintain the versions they were able to muster. Having scaled to immense size, they’re unable or unwilling to manage the digital communities they’ve built
  • They’re paralyzed when it comes to product development and reduced to monopolistic practices such as charging rents and copying or buying up smaller competitors
  • ...10 more annotations...
  • Their policies tend to please no one; it’s a common refrain that antipathy toward Big Tech companies is one of the few truly bipartisan issues
  • You can just feel it, the cumulative weight of this stagnation, in the tech that most of us encounter every day. The act of scrolling past the same dumb ad to peer at the same bad news on the same glass screen on the same social network: This is the stuck future. There is a sense that we have reached the end of the internet, and no one wants to be left holding the bag
  • There’s a palpable exhaustion with the whole enterprise, with the men who set out to build the future or at least get rich, and who accomplished only one and a half of those things.
  • The big social networks are stuck. And there is little profit incentive to get them unstuck. That, after all, would require investing heavily in content moderators, empowering trust and safety teams, and penalizing malicious viral content that brings in huge traffic.
  • It’s not just social media that’s in decline, already over, or worse.
  • As its mighty iPhone sales figures have plateaued and its business has grown more conservative—it hasn’t released a culturally significant new product line since 2016’s AirPods—Apple has begun to embrace advertising.
  • as Google has consolidated its monopoly, the quality of its flagship search product has gotten worse. Result pages are cluttered with ads that must be scrolled through in order to find the “organic” items, and there’s reason to think the quality of the results has gotten worse over time as well.
  • YouTube, meanwhile, is facing many of the same policy quagmires as Facebook and Twitter, especially when it comes to content moderation—and similarly failing to meaningfully address them.
  • What a grim outcome for the internet, where the possibilities were once believed to be endless and where users were promised an infinite spectrum of possibility to indulge their creativity, build robust communities, and find their best expression, even when they could not do so in the real world
  • Big Tech, of course, never predicated its business models on enabling any of that, though its advertising and sloganeering may have suggested otherwise. Rather, companies’ ambitions were always focused on being the biggest: having the most users, selling the most devices, locking the most people into their walled gardens and ecosystems. The stuckness we’re seeing is the result of some of the most ambitious companies of our generation succeeding wildly yet having no vision beyond scale—no serious interest in engaging the civic and social dimensions of their projects.
Javier E

Facebook's hardware ambitions are undercut by its anti-China strategy - The Washington ... - 0 views

  • For more than a year, Meta CEO Mark Zuckerberg has made a point of stoking fears about China. He’s told U.S. lawmakers that China “steals” American technology and played up nationalist concerns about threats from Chinese-owned rival TikTok.
  • Meta has a growing problem: The social media service wants to transform itself into a powerhouse in hardware, and it makes virtually all of it in China. So the company is racing to get out.
  • Facebook has hit walls, say three people familiar with the discussions, who spoke on the condition of anonymity to describe internal conversations.
  • ...7 more annotations...
  • Until recently, the people said, Meta executives viewed the company’s reliance on China to make Oculus virtual reality headsets as a relatively minor concern because the company’s core focus was its social media and messaging apps.
  • All that has changed now that Meta has rebranded itself as a hardware company
  • “Meta is building a complicated hardware product. You can’t just turn on a dime and make it elsewhere,”
  • Facebook’s public criticism of China began in 2019 when Zuckerberg warned, in a speech at Georgetown University, that China was exporting a dangerous vision for the internet to the rest of the world — and noted that Facebook was abandoning its efforts to break into that country’s market.
  • The anti-China stance has since extended into a full-blown corporate strategy. Nick Clegg, the company’s president, wrote an op-ed attacking China in The Washington Post in 2020, the same year Zuckerberg attacked China in a congressional antitrust hearing.
  • At the antitrust hearing in Congress in 2020, Zuckerberg used his opening remarks to attack China in terms that went much further than his industry peers. He said it was “well-documented that the Chinese government steals technology from American companies,” and repeated that the country was “building its own version of the internet” that went against American values. He described Facebook as a “proudly American” company and noted that TikTok was the company’s fastest-growing rival.
  • “They were trying to find things that [Zuckerberg] could agree with Trump on, and it’s a pretty slim list,” said one of the people, describing how the company landed on its anti-China strategy. “If you’re not going to try to be in this country anyway, you might as well use it to your political advantage by contrasting yourself with Apple and TikTok.”
Javier E

Opinion | America, China and a Crisis of Trust - The New York Times - 0 views

  • some eye-popping new realities about what’s really eating away at U.S.-China relations.
  • The new, new thing has a lot to do with the increasingly important role that trust, and its absence, plays in international relations, now that so many goods and services that the United States and China sell to one another are digital, and therefore dual use — meaning they can be both a weapon and a tool.
  • In the last 23 years America has built exactly one sort-of-high-speed rail line, the Acela, serving 15 stops between Washington, D.C., and Boston. Think about that: 900 to 15.
  • it is easy to forget how much we have in common as people. I can’t think of any major nation after the United States with more of a Protestant work ethic and naturally capitalist population than China.
  • These days, it is extremely difficult for a visiting columnist to get anyone — a senior official or a Starbucks barista — to speak on the record. It was not that way a decade ago.
  • The Communist Party’s hold is also a product of all the hard work and savings of the Chinese people, which have enabled the party and the state to build world-class infrastructure and public goods that make life for China’s middle and lower classes steadily better.
  • Beijing and Shanghai, in particular, have become very livable cities, with the air pollution largely erased and lots of new, walkable green spaces.
  • some 900 cities and towns in China are now served by high-speed rail, which makes travel to even remote communities incredibly cheap, easy and comfortable
  • Just when trust has become more important than ever between the U.S. and China, it also has become scarcer than ever. Bad trend.
  • China’s stability is a product of both an increasingly pervasive police state and a government that has steadily raised standards of living. It’s a regime that takes both absolute control and relentless nation-building seriously.
  • For an American to fly from New York’s Kennedy Airport into Beijing Capital International Airport today is to fly from an overcrowded bus terminal to a Disney-like Tomorrowland.
  • China got an early jump on A.I. in two realms — facial recognition technology and health records — because there are virtually no privacy restrictions on the government’s ability to build huge data sets for machine learning algorithms to find patterns.
  • “ChatGPT is prompting some people to ask if the U.S. is rising again, like in the 1990s,”
  • “I understand your feeling: You have been in the first place for a century, and now China is rising, and we have the potential to become the first — and that is not easy for you,” Hu said to me. But “you should not try to stop China’s development. You can’t contain China in the end. We are quite smart. And very diligent. We work very hard. And we have 1.4 billion people.”
  • Before the Trump presidency, he added: “We never thought China-U.S. relations would ever become so bad. Now we gradually accept the situation, and most Chinese people think there is no hope for better relations. We think the relationship will be worse and worse and hope that war will not break out between our two countries.”
  • A lot of people hesitated when I asked. Indeed, many would answer with some version of “I’m not sure, I just know that it’s THEIR fault.”
  • It was repeated conversations like these that got me started asking American, Chinese and Taiwanese investors, analysts and officials a question that has been nagging at me for a while: What exactly are America and China fighting about?
  • the real answer is so much deeper and more complex than just the usual one-word response — “Taiwan” — or the usual three-word response — “autocracy versus democracy.”
  • Let me try to peel back the layers. The erosion in U.S.-China relations is a result of something old and obvious — a traditional great-power rivalry between an incumbent power (us) and a rising power (China) — but with lots of new twists
  • One of the twists, though, is that this standard-issue great-power rivalry is occurring between nations that have become as economically intertwined as the strands of a DNA molecule. As a result, neither China nor America has ever had a rival quite like the other.
  • in modern times, China, like America, has never had to deal with a true economic and military peer with which it was also totally intertwined through trade and investment.
  • Another new twist, and a reason it’s hard to define exactly what we’re fighting about, has a lot to do with how this elusive issue of trust and the absence of it have suddenly assumed much greater importance in international affairs.
  • This is a byproduct of our new technological ecosystem in which more and more devices and services that we both use and trade are driven by microchips and software, and connected through data centers in the cloud and high-speed internet
  • so many more things became “dual use.” That is, technologies that can easily be converted from civilian tools to military weapons, or vice versa.
  • no one country or company can own the whole supply chain. You need the best from everywhere, and that supply chain is so tightly intertwined that each company has to trust the others intimately.
  • when we install the ability to sense, digitize, connect, process, learn, share and act into more and more things — from your GPS-enabled phone to your car to your toaster to your favorite app — they all become dual use, either weapons or tools depending on who controls the software running them and who owns the data that they spin off.
  • As long as most of what China sold us was shallow goods, we did not care as much about its political system — doubly so because it seemed for a while as if China was slowly but steadily becoming more and more integrated with the world and slightly more open and transparent every year. So, it was both easy and convenient to set aside some of our worries about the dark sides of its political system.
  • when you want to sell us ‘deep goods’ — goods that are dual use and will go deep into our homes, bedrooms, industries, chatbots and urban infrastructure — we don’t have enough trust to buy them. So, we are going to ban Huawei and instead pay more to buy our 5G telecom systems from Scandinavian companies we do trust: Ericsson and Nokia.”
  • as we’ve seen in Ukraine, a smartphone can be used by Grandma to call the grandkids or to call a Ukrainian rocket-launching unit and give it the GPS coordinates of a Russian tank in her backyard.
  • So today, the country or countries that can make the fastest, most powerful and most energy efficient microchips can make the biggest A.I. computers and dominate in economics and military affairs.
  • As more and more products and services became digitized and electrified, the microchips that powered everything became the new oil. What crude oil was to powering 19th- and 20th-century economies, microchips are for powering 21st-century economies.
  • When you ask them what is the secret that enables TSMC to make 90 percent of the world’s most advanced logic chips — while China, which speaks the same language and shares the same recent cultural history, makes zero — their answer is simple: “trust.”
  • TSMC is a semiconductor foundry, meaning it takes the designs of the most advanced computer companies in the world — Apple, Qualcomm, Nvidia, AMD and others — and turns the designs into chips that perform different processing functions
  • TSMC makes two solemn oaths to its customers: TSMC will never compete against them by designing its own chips and it will never share the designs of one of its customers with another.
  • “Our business is to serve multiple competitive clients,” Kevin Zhang, senior vice president for business development at TSMC, explained to me. “We are committed not to compete with any of them, and internally our people who serve customer A will never leak their information to customer C.”
  • But by working with so many trusted partners, TSMC leverages the partners’ steadily more complex designs to make itself better — and the better it gets, the more advanced designs it can master for its customers. This not only requires incredibly tight collaboration between TSMC and its customers, but also between TSMC and its roughly 1,000 critical local and global suppliers.
  • As the physics of chip making gets more and more extreme, “the investment from customers is getting bigger and bigger, so they have to work with us more closely to make sure they harvest as much [computing power] as they can. They have to trust you.”
  • China also has a foundry, Semiconductor Manufacturing International Corporation, which is partly state-owned. But guess what? Because no global chip designers trust SMIC with their most advanced designs, it is at least a decade behind TSMC.
  • It’s for these reasons that the erosion in U.S.-China relations goes beyond our increasingly sharp disagreements over Taiwan. It is rooted in the fact that just when trust, and its absence, became much bigger factors in international affairs and commerce, China changed its trajectory. It made itself a less trusted partner right when the most important technology for the 21st century — semiconductors — required unprecedented degrees of trust to manufacture and more and more devices and services became deep and dual use.
  • when American trade officials said: “Hey, you need to live up to your W.T.O. commitments to restrict state-funding of industries,” China basically said: “Why should we live by your interpretation of the rules? We are now big enough to make our own interpretations. We’re too big; you’re too late.”
  • Combined with China’s failure to come clean on what it knew about the origins of Covid-19, its crackdown on democratic freedoms in Hong Kong and on the Uyghur Muslim minority in Xinjiang, its aggressive moves to lay claim to the South China Sea, its increasing saber rattling toward Taiwan, its cozying up to Vladimir Putin (despite his savaging of Ukraine), Xi’s moves toward making himself president for life, his kneecapping of China’s own tech entrepreneurs, his tighter restrictions on speech and the occasional abduction of a leading Chinese businessman — all of these added up to one very big thing: Whatever trust that China had built up with the West since the late 1970s evaporated at the exact moment in history when trust, and shared values, became more important than ever in a world of deep, dual-use products driven by software, connectivity and microchips.
  • it started to matter a lot more to Western nations generally and the United States in particular that this rising power — which we were now selling to or buying from all sorts of dual-use digital devices or apps — was authoritarian.
  • Beijing, for its part, argues that as China became a stronger global competitor to America — in deep goods like Huawei 5G — the United States simply could not handle it and decided to use its control over advanced semiconductor manufacturing and other high-tech exports from America, as well as from our allies, to ensure China always remained in our rearview mirror
  • Beijing came up with a new strategy, called “dual circulation.” It said: We will use state-led investments to make everything we possibly can at home, to become independent of the world. And we will use our manufacturing prowess to make the world dependent on our exports.
  • Chinese officials also argue that a lot of American politicians — led by Trump but echoed by many in Congress — suddenly seemed to find it very convenient to put the blame for economic troubles in the U.S.’s middle class not on any educational deficiencies, or a poor work ethic, or automation or the 2008 looting by financial elites, and the crisis that followed, but on China’s exports to the United States.
  • As Beijing sees it, China not only became America’s go-to boogeyman, but in their frenzy to blame Beijing for everything, members of Congress started to more recklessly promote Taiwan’s independence.
  • Xi told President Biden at their summit in Bali in November, in essence: I will not be the president of China who loses Taiwan. If you force my hand, there will be war. You don’t understand how important this is to the Chinese people. You’re playing with fire.
  • at some level Chinese officials now understand that, as a result of their own aggressive actions in recent years on all the fronts I’ve listed, they have frightened both the world and their own innovators at precisely the wrong time.
  • I don’t buy the argument that we are destined for war. I believe that we are doomed to compete with each other, doomed to cooperate with each other and doomed to find some way to balance the two. Otherwise we are both going to have a very bad 21st century.
  • I have to say, though, Americans and Chinese remind me of Israelis and Palestinians in one respect: They are both expert at aggravating the other’s deepest insecurities.
  • China’s Communist Party is now convinced that America wants to bring it down, which some U.S. politicians are actually no longer shy about suggesting. So, Beijing is ready to crawl into bed with Putin, a war criminal, if that is what it takes to keep the Americans at bay.
  • Americans are now worried that Communist China, which got rich by taking advantage of a global market shaped by American rules, will use its newfound market power to unilaterally change those rules entirely to its advantage. So we’ve decided to focus our waning strength vis-à-vis Beijing on ensuring the Chinese will always be a decade behind us on microchips.
  • I don’t know what is sufficient to reverse these trends, but I think I know what is necessary.
  • If it is not the goal of U.S. foreign policy to topple the Communist regime in China, the United States needs to make that crystal clear, because I found a lot more people than ever before in Beijing think otherwise.
  • As for China, it can tell itself all it wants that it has not taken a U-turn in recent years. But no one is buying it. China will never realize its full potential — in a hyper-connected, digitized, deep, dual-use, semiconductor-powered world — unless it understands that establishing and maintaining trust is now the single most important competitive advantage any country or company can have. And Beijing is failing in that endeavor.
  • In his splendid biography of the great American statesman George Shultz, Philip Taubman quotes one of Shultz’s cardinal rules of diplomacy and life: “Trust is the coin of the realm.”
Javier E

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products,
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”
Javier E

Tween trends get more expensive as they take cues from social media - The Washington Post - 0 views

  • While earlier generations might have taken their cues from classmates or magazines, tweens and teens now see their peers on platforms like TikTok, Pinterest, Instagram and YouTube.
  • And it’s spawning viral moments in retail, as evidenced by last week’s release of limited-edition Stanley tumblers at Target. Fans lined up outside stores before sunrise to nab the cup made in collaboration with Starbucks, and arguments broke out at a handful of locations.
  • This age group also is snapping up pricey makeup and skin care, even products usually reserved for “mature” skin. That’s given rise to viral TikToks from exasperated adults.
  • The mania behind these products is heightened by their collectability and the sense of connection they offer, industry experts say.“Material things have always been markers of identity,” Drenten said.
  • It’s also compounded by biology — puberty and cognitive development can feel upending and confusing, said Mindy Weinstein, the founder and chief executive of digital marketing company Market MindShift. So buying into a trend or product — perhaps popularized by older teens — can ease those uncomfortable feelings.
  • It’s known as the “bandwagon effect,” and it’s really pronounced in that age group,
  • “they aren’t always sure where they fit into the world. But now by buying that [item] they feel like they fit in.
  • Every generation of tween has had products, accessories, brands and styles they covet. A decade ago, it was Justice clothing, colorful iPod minis, Sidekick cellphones and EOS lip balm. In the early 2000s, Juicy sweatsuits, North Face fleece jackets, Nike Shox, Abercrombie & Fitch and Razr flip phones reigned. In the ’90s it was buying from the Delia’s catalogue magazine, Lip Smacker balms, United Colors of Benetton and Tommy Hilfiger polos. The ’80s had Guess jeans, Keds, banana hair clips and J. Crew sweaters. In the ’70s it was mood rings, Wrangler and Levi’s jeans, Puma sneakers and Frye boots.
  • More than half of U.S. teenagers (ages 13 to 19) spend at least four hours a day on social media, according to Gallup, and most of that time is spent on YouTube and TikTok
  • And it’s highly effective — consumers are more likely to consider buying a product and have a favorable opinion about it if it went viral
  • “TikTok influencers already have their trust … teens and tweens see them and they want to also be into that trend and feel like they’re belonging to that social group,”
  • It used to be that our hair, makeup and skin care products were only visible to those who entered our bedrooms, scanning vanities and opening drawers. Now, teens and tweens are filming “Get ready with me” videos, showing off their Rare Beauty liquid blush ($23), Laneige lip balm ($18) and Charlotte Tilbury setting spray ($38) as they complain about school or recap a friend’s bat mitzvah.
  • Margeaux Richmond and her friends spend a lot of time talking about skin care. The 12-year-old from Des Moines said she got a $62 Drunk Elephant moisturizer for Christmas. “It’s kind of pricey, but if it’s good for your skin it’s worth it,” she said. “It’s kind of important to me and my friends because we don’t want our skin to look bad or anything.”
  • This also fuels a collectability culture. The customer no longer wants one water bottle, one pair of Air Jordans, one Summer Fridays lip balm or one Nike sweatshirt — they want them in every color.
  • “We have to think about today’s consumers, not as consumers, but as fans; and fandom has always been intertwined with collecting,” Drenten said. “In today’s culture, particularly among young people, we’ve kind of shifted away from obsession with celebrities to obsession with brands.”
  • Having and displaying a collection on shelves and on social media is seen as a status symbol.
  • Superfans also collect accessories for some of these products, Briggs said, spawning a whole side industry for some products.
  • Who’s doing the actual buying is harder to track. Not all adolescents have jobs or parents who are able or willing to spend $550 on Apple AirPods Max or $275 on a Tiffany & Co’s Pink Double Heart Tag Pendant necklace. “These products, to some extent, are a point of privilege and status,
  • Some of the spending could be attributed to more young people in the workforce: Roughly 37 percent of 16- to 19-year-olds had a job or were looking for one last year,
  • That’s the highest rate since 2009.
  • Richmond said she uses her babysitting money to buy Drunk Elephant skin care or Kendra Scott jewelry — items “my parents won’t buy me.” She’s saving up for her second Stanley tumbler.
  • Drenten emphasized that shopping or gift hauls on social media don’t reflect what every teen or tween wants. It varies by socioeconomics, demographics and personal preference. “At the end of the day, they can still be influenced by who they’re around and not necessarily what they’re seeing as the top line products online.”