carolinehayter

Google Lawsuit Marks End Of Washington's Love Affair With Big Tech : NPR - 0 views

  • The U.S. Justice Department and 11 state attorneys general have filed a blockbuster lawsuit against Google, accusing it of being an illegal monopoly because of its stranglehold on Internet search.
  • The government alleged Google has come by its wild success — 80% market share in U.S. search, a valuation eclipsing $1 trillion — unfairly. It said multibillion-dollar deals Google has struck to be the default search engine in many of the world's Web browsers and smartphones have boxed out its rivals.
  • Google's head of global affairs, Kent Walker, said the government's case is "deeply flawed." The company warned that if the Justice Department prevails, people would pay more for their phones and have worse options for searching the Internet.
  • ...19 more annotations...
  • Just look at the word "Google," the lawsuit said — it's become "a verb that means to search the internet." What company can compete with that?
  • "It's been a relationship of extremes,"
  • a tectonic shift is happening right now: USA v. Google is the biggest manifestation of what has become known as the "Techlash" — a newfound skepticism of Silicon Valley's giants and growing appetite to rein them in through regulation.
  • "It's the end of hands-off of the tech sector," said Gene Kimmelman, a former senior antitrust official at the Justice Department. "It's probably the beginning of a decade of a series of lawsuits against companies like Google who dominate in the digital marketplace."
  • For years, under both Republican and Democratic administrations, Silicon Valley's tech stars have thrived with little regulatory scrutiny
  • There is similar skepticism in Washington of Facebook, Amazon and Apple — the companies that, with Google, have become known as Big Tech, an echo of the corporate villains of earlier eras such as Big Oil and Big Tobacco.
  • All four tech giants have been under investigation by regulators, state attorneys general and Congress — a sharp shift from just a few years ago when many politicians cozied up to the cool kids of Silicon Valley.
  • Tech companies spend millions of dollars lobbying lawmakers, and many high-level government officials have left politics to work in tech,
  • It will likely be years before this fight is resolved.
  • She said Washington's laissez-faire attitude toward tech is at least partly responsible for the sector's expansion into nearly every aspect of our lives.
  • "These companies were allowed to grow large, in part because they had political champions on both sides of the aisle that really supported what they were doing and viewed a lot of what they were doing uncritically. And then ... these companies became so big and so powerful and so good at what they set out to do, it became something of a runaway train," she said.
  • The Google lawsuit is the most concrete action in the U.S. to date challenging the power of Big Tech. While the government stopped short of explicitly calling for a breakup, U.S. Associate Deputy Attorney General Ryan Shores said that "nothing's off the table."
  • "This case signals that the antitrust winter is over,"
  • other branches of government are also considering ways to bring these companies to heel. House Democrats released a sweeping report this month calling for new rules to strip Apple, Amazon, Facebook and Google of the power that has made each of them dominant in their fields. Their recommendations ranged from forced "structural separations" to reforming American antitrust law. Republicans, meanwhile, have channeled much of their ire into allegations that platforms such as Facebook and Twitter are biased against conservatives — a claim for which there is no conclusive evidence.
  • Congressional Republicans and the Trump administration are using those bias claims to push for an overhaul of Section 230 of the 1996 Communications Decency Act, a longstanding legal shield that protects online platforms from being sued over what people post on them and says they can't be punished for reasonable moderation of those posts.
  • The CEOs of Google, Facebook and Twitter are set to appear next week before the Senate Commerce Committee at a hearing about Section 230.
  • On the same day the Justice Department sued Google, two House Democrats, Anna Eshoo, whose California district includes large parts of Silicon Valley, and Tom Malinowski of New Jersey, introduced their own bill taking aim at Section 230. It would hold tech companies liable if their algorithms amplify or recommend "harmful, radicalizing content that leads to offline violence."
  • That means whichever party wins control of the White House and Congress in November, Big Tech should not expect the temperature in Washington to warm up.
  • Editor's note: Google, Facebook, Apple and Amazon are among NPR's financial supporters.
Javier E

Opinion | Biden Trade Policy Breaks With Tech Giants - The New York Times - 0 views

  • One reason that the idea of free trade has fallen out of fashion in recent years is the perception that trade agreements reflect the wishes of big American corporations, at everybody else’s expense.
  • U.S. officials fought for trade agreements that protect intellectual property — and drug companies got the chance to extend the life of patents, raising the price of medicine around the world. U.S. officials fought for investor protections — and mining companies got the right to sue for billions in “lost profit” if a country moved to protect its drinking water or the Amazon ecosystem. And for years, U.S. officials have fought for digital trade rules that allow data to move freely across national borders — prompting fears that the world’s most powerful tech companies would use those rules to stay ahead of competitors and shield themselves from regulations aimed at protecting consumers and privacy.
  • That’s why the Biden administration, which came into office promising to fight for trade agreements that better reflect the interests of ordinary people, has dropped its advocacy for tech-friendly digital trade rules that American officials have championed for more than a decade.
  • ...14 more annotations...
  • Last month, President Biden’s trade representative, Katherine Tai, notified the World Trade Organization that the American government no longer supported a proposal it once spearheaded that would have exported the American laissez-faire approach to tech. Had that proposal been adopted, it would have spared tech companies the headache of having to deal with many different domestic laws about how data must be handled, including rules mandating that it be stored or analyzed locally. It also would have largely shielded tech companies from regulations aimed at protecting citizens’ privacy and curbing monopolistic behavior.
  • The move to drop support for that digital trade agenda has been pilloried as a disaster for American companies and a boon to China, which has a host of complicated restrictions on transferring data outside of China. “We have warned for years that either the United States would write the rules for digital trade or China would,” Senator Mike Crapo, a Republican from Idaho, lamented in a press statement. “Now, the Biden administration has decided to give China the pen.”
  • While some of this agenda is reasonable and good for the world — too much regulation stifles innovation — adopting this agenda wholesale would risk cementing the advantages that big American tech companies already enjoy and permanently distorting the market in their favor.
  • who used to answer the phone and interact with lobbyists at the U.S. trade representative’s office. The paper includes redacted emails between Trump-era trade negotiators and lobbyists for Facebook, Google, Microsoft and Amazon, exchanging suggestions for the proposed text for the policy on digital trade in the United States-Mexico-Canada Agreement. “While they were previously ‘allergic to Washington,’ as one trade negotiator described, over the course of a decade, technology companies hired lobbyists and joined trade associations with the goal of proactively influencing international trade policy,” Ms. Li wrote in the Socio-Economic Review.
  • That paper explains how U.S. trade officials came to champion a digital trade policy agenda that was nearly identical to what Google, Apple and Meta wanted: No restrictions on the flow of data across borders. No forced disclosure of source codes or algorithms in the normal course of business. No laws that would curb monopolies or encourage more competition — a position that is often cloaked in clauses prohibiting discrimination against American companies. (Since so many of the monopolistic big tech players are American, rules targeting such behavior disproportionately fall on American companies, and can be portrayed as unfair barriers to trade.)
  • The truth is that Ms. Tai is taking the pen away from Meta, Google and Amazon, which helped shape the previous policy, according to a research paper published this year by Wendy Li,
  • This approach essentially takes the power to regulate data out of the hands of governments and gives it to technology companies, according to research by Henry Gao, a Singapore-based expert on international trade.
  • Many smaller tech companies complain that big players engage in monopolistic behavior that should be regulated. For instance, Google has been accused of privileging its own products in search results, while Apple has been accused of charging some developers exorbitant fees to be listed in its App Store. A group of smaller tech companies called the Coalition for App Fairness thanked Ms. Tai for dropping support for the so-called tech-friendly agenda at the World Trade Organization.
  • Still, Ms. Tai’s reversal stunned American allies and foreign business leaders and upended negotiations over digital trade rules in the Indo-Pacific Economic Framework, one of Mr. Biden’s signature initiatives in Asia.
  • The about-face was certainly abrupt: Japan, Singapore and Australia — which supported the previous U.S. position — were left on their own. It’s unfortunate that U.S. allies and even some American officials were taken by surprise. But changing stances was the right call.
  • The previous American position at the World Trade Organization was a minority position. Only 34 percent of countries in the world have open data transfer policies like the United States, according to a 2021 World Bank working paper, while 57 percent have adopted policies like the European Union’s, which allow data to flow freely but leave room for laws that protect privacy and personal data.
  • Nine percent of countries have restrictive data transfer policies, including Russia and China.
  • The United States now has an opportunity to hammer out a sensible global consensus that gives tech companies what they need — clarity, more universal rules, and relative freedom to move data across borders — without shielding them from the kinds of regulations that might be required to protect society and competition in the future.
  • If the Biden administration can shepherd a digital agreement that strikes the right balance, there’s a chance that it will also restore faith in free trade by showing that trade agreements don’t have to be written by the powerful at the expense of the weak.
Javier E

The new tech worldview | The Economist - 0 views

  • Sam Altman is almost supine
  • the 37-year-old entrepreneur looks about as laid-back as someone with a galloping mind ever could. Yet the CEO of OpenAI, a startup reportedly valued at nearly $20bn whose mission is to make artificial intelligence a force for good, is not one for light conversation
  • Joe Lonsdale, 40, is nothing like Mr Altman. He’s sitting in the heart of Silicon Valley, dressed in linen with his hair slicked back. The tech investor and entrepreneur, who has helped create four unicorns plus Palantir, a data-analytics firm worth around $15bn that works with soldiers and spooks
  • ...25 more annotations...
  • a “builder class”—a brains trust of youngish idealists, which includes Patrick Collison, co-founder of Stripe, a payments firm valued at $74bn, and other (mostly white and male) techies, who are posing questions that go far beyond the usual interests of Silicon Valley’s titans. They include the future of man and machine, the constraints on economic growth, and the nature of government.
  • They share other similarities. Business provided them with their clout, but doesn’t seem to satisfy their ambition
  • The number of techno-billionaires in America (Mr Collison included) has more than doubled in a decade.
  • Some of them, like the Medicis in medieval Florence, are keen to use their money to bankroll the intellectual ferment
  • The other is Paul Graham, co-founder of Y Combinator, a startup accelerator, whose essays on everything from cities to politics are considered required reading on tech campuses.
  • Mr Altman puts it more optimistically: “The iPhone and cloud computing enabled a Cambrian explosion of new technology. Some things went right and some went wrong. But one thing that went weirdly right is a lot of people got rich and said ‘OK, now what?’”
  • A belief that with money and brains they can reboot social progress is the essence of this new mindset, making it resolutely upbeat
  • The question is: are the rest of them further evidence of the tech industry’s hubristic decadence? Or do they reflect the start of a welcome capacity for renewal?
  • Two well-known entrepreneurs from that era provided the intellectual seed capital for some of today’s techno nerds.
  • Mr Thiel, a would-be libertarian philosopher and investor
  • This cohort of eggheads starts from common ground: frustration with what they see as sluggish progress in the world around them.
  • In the 2000s Mr Thiel supported the emergence of a small community of online bloggers, self-named the “rationalists”, who were focused on removing cognitive biases from thinking (Mr Thiel has since distanced himself). That intellectual heritage dates even further back, to “cypherpunks”, who noodled about cryptography, as well as “extropians”, who believed in improving the human condition through life extensions
  • the rationalist movement has hit the mainstream. The result is a fascination with big ideas that its advocates believe goes beyond simply rose-tinted tech utopianism
  • A burgeoning example of this is “progress studies”, a movement that Mr Collison and Tyler Cowen, an economist and seer of the tech set, advocated for in an article in the Atlantic in 2019
  • Progress, they think, is a combination of economic, technological and cultural advancement—and deserves its own field of study
  • There are other examples of this expansive worldview. In an essay in 2021 Mr Altman set out a vision that he called “Moore’s Law for Everything”, based on similar logic to the semiconductor revolution. In it, he predicted that smart machines, building ever smarter replacements, would in the coming decades outcompete humans for work. This would create phenomenal wealth for some, obliterate wages for others, and require a vast overhaul of taxation and redistribution
  • His two bets, on OpenAI and nuclear fusion, have become fashionable of late—the former’s chatbot, ChatGPT, is all the rage. He has invested $375m in Helion, a company that aims to build a fusion reactor.
  • Mr Lonsdale, who shares a libertarian streak with Mr Thiel, has focused attention on trying to fix the shortcomings of society and government. In an essay this year called “In Defence of Us”, he argues against “historical nihilism”, or an excessive focus on the failures of the West.
  • With a soft spot for Roman philosophy, he has created the Cicero Institute in Austin that aims to inject free-market principles such as competition and transparency into public policy.
  • He is also bringing the startup culture to academia, backing a new place of learning called the University of Austin, which emphasises free speech.
  • All three have business ties to their mentors. As a teen, Mr Altman was part of the first cohort of founders in Mr Graham’s Y Combinator, which went on to back successes such as Airbnb and Dropbox. In 2014 he replaced him as its president, and for a while counted Mr Thiel as a partner (Mr Altman keeps an original manuscript of Mr Thiel’s book “Zero to One” in his library). Mr Thiel was also an early backer of Stripe, founded by Mr Collison and his brother, John. Mr Graham saw promise in Patrick Collison while the latter was still at school. He was soon invited to join Y Combinator. Mr Graham remains a fan: “If you dropped Patrick on a desert island, he would figure out how to reproduce the Industrial Revolution,”
  • While at university, Mr Lonsdale edited the Stanford Review, a contrarian publication co-founded by Mr Thiel. He went on to work for his mentor and the two men eventually helped found Palantir. He still calls Mr Thiel “a genius”—though he claims these days to be less “cynical” than his guru.
  • “The tech industry has always told these grand stories about itself,” says Adrian Daub of Stanford University and author of the book, “What Tech Calls Thinking”. Mr Daub sees it as a way of convincing recruits and investors to bet on their risky projects. “It’s incredibly convenient for their business models.”
  • Yet the impact could ultimately be positive. Frustrations with a sluggish society have encouraged them to put their money and brains to work on problems from science funding and the redistribution of wealth to entirely new universities. Their exaltation of science may encourage a greater focus on hard tech
  • Silicon Valley has shown an uncanny ability to reinvent itself in the past.
ethanshilling

San Francisco's Tech Workers Make the Big Move - The New York Times - 0 views

  • Rent was astronomical. Taxes were high. Your neighbors didn’t like you. If you lived in San Francisco, you might have commuted an hour south to your job at Apple or Google or Facebook.
  • Remote work offered a chance at residing for a few months in towns where life felt easier. Tech workers and their bosses realized they might not need all the perks and after-work schmooze events.
  • That’s where the story of the Bay Area’s latest tech era is ending for a growing crowd of tech workers and their companies. They have suddenly movable jobs and money in the bank — money that will go plenty further somewhere else.
  • ...9 more annotations...
  • The No. 1 pick for people leaving San Francisco is Austin, Texas, with other winners including Seattle, New York and Chicago, according to moveBuddha, a site that compiles data on moving.
  • The biggest tech companies aren’t going anywhere, and tech stocks are still soaring. Apple’s flying-saucer-shaped campus is not going to zoom away. Google is still absorbing ever more office space in San Jose and San Francisco. New founders are still coming to town.
  • But the migration from the Bay Area appears real. Residential rents in San Francisco are down 27 percent from a year ago, and the office vacancy rate has spiked to 16.7 percent, a number not seen in a decade.
  • Pinterest, which has one of the most iconic offices in town, paid $90 million to break a lease for a site where it planned to expand. And companies like Twitter and Facebook have announced “work from home forever” plans.
  • Now the local tech industry is rapidly expanding. Apple is opening a $1 billion, 133-acre campus. Alphabet, Amazon and Facebook have all either expanded their footprints in Austin or have plans to. Elon Musk, the Tesla founder and one of the two richest men in the world, said he had moved to Texas. Start-up investor money is arriving, too: The investors at 8VC and Breyer Capital opened Austin offices last year.
  • The San Francisco exodus means the talent and money of newly remote tech workers are up for grabs. And it’s not just the mayor of Miami trying to lure them in.
  • There are 33,000 members in the Facebook group Leaving California and 51,000 in its sister group, Life After California. People post pictures of moving trucks and links to Zillow listings in new cities.
  • If San Francisco of the 2010s proved anything, it’s the power of proximity. Entrepreneurs could find a dozen start-up pitch competitions every week within walking distance. If they left a big tech company, there were start-ups eager to hire, and if a start-up failed, there was always another.
  • No one leaving the city is arguing that a culture of innovation is going to spring up over Zoom. So some are trying to recreate it. They are getting into property development, building luxury tiny-home compounds and taking over big, funky houses in old resort towns.
Javier E

Big Tech Has Become Way Too Powerful - The New York Times - 0 views

  • CONSERVATIVES and liberals interminably debate the merits of “the free market” versus “the government.”
  • The important question, too rarely discussed, is who has the most influence over these decisions and in that way wins the game.
  • Now information and ideas are the most valuable forms of property. Most of the cost of producing it goes into discovering it or making the first copy. After that, the additional production cost is often zero. Such “intellectual property” is the key building block of the new economy
  • ...14 more annotations...
  • as has happened before with other forms of property, the most politically influential owners of the new property are doing their utmost to increase their profits by creating monopolies
  • The most valuable intellectual properties are platforms so widely used that everyone else has to use them, too. Think of standard operating systems like Microsoft’s Windows or Google’s Android; Google’s search engine; Amazon’s shopping system; and Facebook’s communication network
  • Despite an explosion in the number of websites over the last decade, page views are becoming more concentrated. While in 2001, the top 10 websites accounted for 31 percent of all page views in America, by 2010 the top 10 accounted for 75 percent
  • Amazon is now the first stop for almost a third of all American consumers seeking to buy anything
  • Google and Facebook are now the first stops for many Americans seeking news — while Internet traffic to much of the nation’s newspapers, network television and other news gathering agencies has fallen well below 50 percent of all traffic.
  • almost all of the profits go to the platforms’ owners, who have all of the bargaining power
  • The rate at which new businesses have formed in the United States has slowed markedly since the late 1970s. Big Tech’s sweeping patents, standard platforms, fleets of lawyers to litigate against potential rivals and armies of lobbyists have created formidable barriers to new entrants
  • The law gives 20 years of patent protection to inventions that are “new and useful,” as decided by the Patent and Trademark Office. But the winners are big enough to game the system. They make small improvements warranting new patents, effectively making their intellectual property semipermanent.
  • They also lay claim to whole terrains of potential innovation including ideas barely on drawing boards and flood the system with so many applications that lone inventors have to wait years.
  • Big Tech has been almost immune to serious antitrust scrutiny, even though the largest tech companies have more market power than ever. Maybe that’s because they’ve accumulated so much political power.
  • Economic and political power can’t be separated because dominant corporations gain political influence over how markets are maintained and enforced, which enlarges their economic power further. One of the original goals of antitrust law was to prevent this.
  • We are now in a new gilded age similar to the first Gilded Age, when the nation’s antitrust laws were enacted. As then, those with great power and resources are making the “free market” function on their behalf. Big Tech — along with the drug, insurance, agriculture and financial giants — dominates both our economy and our politics.
  • The real question is how government organizes the market, and who has the most influence over its decisions
  • Yet as long as we remain obsessed by the debate over the relative merits of the “free market” and “government,” we have little hope of seeing what’s occurring and taking the action that’s needed to make our economy work for the many, not the few.
Javier E

Silicon Valley's Trillion-Dollar Leap of Faith - The Atlantic - 0 views

  • Tech companies like to make two grand pronouncements about the future of artificial intelligence. First, the technology is going to usher in a revolution akin to the advent of fire, nuclear weapons, and the internet.
  • And second, it is going to cost almost unfathomable sums of money.
  • Silicon Valley has already triggered tens or even hundreds of billions of dollars of spending on AI, and companies only want to spend more.
  • ...22 more annotations...
  • Their reasoning is straightforward: These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on
  • “If we’re going to justify a trillion or more dollars of investment, [AI] needs to solve complex problems and enable us to do things we haven’t been able to do before.” Today’s flagship AI models, he said, largely cannot.
  • Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off. OpenAI, for its part, may lose up to $5 billion this year, almost 10 times more than what the company lost in 2022,
  • Over the past few weeks, analysts and investors at some of the world’s most influential financial institutions—including Goldman Sachs, Sequoia Capital, Moody’s, and Barclays—have issued reports that raise doubts about whether the enormous investments in generative AI will be profitable.
  • Dario Amodei, the CEO of the rival start-up Anthropic, has predicted that a single AI model (such as, say, GPT-6) could cost $100 billion to train by 2027. The global data-center buildup over the next few years could require trillions of dollars from tech companies, utilities, and other industries, according to a July report from Moody’s Ratings.
  • generative AI has already done extraordinary things, of course—advancing drug development, solving challenging math problems, generating stunning video clips. But exactly what uses of the technology can actually make money remains unclear
  • At present, AI is generally good at doing existing tasks—writing blog posts, coding, translating—faster and cheaper than humans can. But efficiency gains can provide only so much value, boosting the current economy but not creating a new one.
  • Right now, Silicon Valley might just functionally be replacing some jobs, such as customer service and form-processing work, with historically expensive software, which is not a recipe for widespread economic transformation.
  • McKinsey has estimated that generative AI could eventually add almost $8 trillion to the global economy every year
  • “Here, we can manufacture intelligence.”
  • Tony Kim, the head of technology investment at BlackRock, the world’s largest money manager, told me he believes that AI will trigger one of the most significant technological upheavals ever. “Prior industrial revolutions were never about intelligence,”
  • this future is not guaranteed. Many of the productivity gains expected from AI could be both greatly overestimated and very premature, Daron Acemoglu, an economist at MIT, has found
  • AI products’ key flaws, such as a tendency to invent false information, could make them unusable, or deployable only under strict human oversight, in certain settings—courts, hospitals, government agencies, schools
  • AI as a truly epoch-shifting technology, it may well be more akin to blockchain, a very expensive tool destined to fall short of promises to fundamentally transform society and the economy.
  • Researchers at Barclays recently calculated that tech companies are collectively paying for enough AI-computing infrastructure to eventually power 12,000 different ChatGPTs. Silicon Valley could very well produce a whole host of hit generative-AI products like ChatGPT, “but probably not 12,000 of them,
  • even if it did, there would be nowhere near enough demand to use all those apps and actually turn a profit.
  • Some of the largest tech companies’ current spending on AI data centers will require roughly $600 billion of annual revenue to break even, of which they are currently about $500 billion short.
  • Tech proponents have responded to the criticism that the industry is spending too much, too fast, with something like religious dogma. “I don’t care” how much we spend, Altman has said. “I genuinely don’t.”
  • the industry is asking the world to engage in something like a trillion-dollar tautology: AI’s world-transformative potential justifies spending any amount of resources, because its evangelists will spend any amount to make AI transform the world.
  • in the AI era in particular, a lack of clear evidence for a healthy return on investment may not even matter. Unlike the companies that went bust in the dot-com bubble in the early 2000s, Big Tech can spend exorbitant sums of money and be largely fine
  • perhaps even more important in Silicon Valley than a messianic belief in AI is a terrible fear of missing out. “In the tech industry, what drives part of this is nobody wants to be left behind. Nobody wants to be seen as lagging,
  • Go all in on AI, the thinking goes, or someone else will. Their actions evince “a sense of desperation,” Cahn writes. “If you do not move now, you will never get another chance.” Enormous sums of money are likely to continue flowing into AI for the foreseeable future, driven by a mix of unshakable confidence and all-consuming fear.
Javier E

Start-Ups Hoping to Fight Climate Change Struggle as Other Tech Firms Cash In - The New... - 0 views

  • The last time venture capitalists invested heavily in environmentally focused technology during the so-called clean-tech boom of the 2000s, they lost a lot of money. Getting one of these companies off the ground can be expensive
  • “Sitting on your pile of money while the oceans are rising may not help you stay dry,”
  • It is common wisdom in the tech industry that it is much easier to raise money for a software company than it is for a start-up that wants to work in biotechnology or energy
  • ...22 more annotations...
  • Total funding for clean-tech start-ups fell during most of the past decade
  • But there are dozens, if not hundreds, of start-ups developing new technologies that address the issue.
  • Two major scientific organizations said last fall that even if greenhouse-gas emissions were reduced significantly, stopping drastic global warming would require technological breakthroughs that allowed for the removal of billions of tons of carbon dioxide already in the atmosphere.
  • Some promising methods for accomplishing that involve old-fashioned technologies, like planting trees and changing the ways farmers till their fields
  • In 2018, $6.6 billion was invested in clean tech, about 15 percent of what went to software start-ups. Carbon-removal start-ups got a tiny sliver of that.
  • So far, no one has found an obvious way to turn capturing carbon dioxide into a profitable business.
  • Noah Deich, the founder of Carbon180, a nonprofit that sponsored the event, said it was encouraging to see investors there. But he said he had not seen the commitment to investing that he believed was necessary to get the technologies working.
  • “For an internet company, even if you don’t have a real product, you can get money to develop one,” he said. “Here, it’s the opposite.”
  • “It is tackling big markets and big challenges, but that doesn’t necessarily mean that those are going to be big businesses,”
  • a broad array of investors, including venture capitalists, will need to get involved. And they will need to wait more than three or four years to cash out
  • Mr. Oros said that his fund had not made an investment in the sector and that he did not see a way for the industry to take off without government policy encouraging it
  • for these businesses to succeed it would probably be necessary for governments to create a carbon tax or other subsidies as incentives for new businesses.
  • Mr. Lackner said investors should assume that governments would be willing at some point to pay for what these companies were doing.
  • “In the end, there is no way for the market to not exist,” he said. “This will be a brand-new industry at a huge scale.”
  • In the time it took Carbon Engineering to raise one round of $68 million, Slack, a messaging company founded the same year, has raised more than 10 times as much and is now preparing for an initial public offering that could value it at nearly $20 billion.
  • Everyone who discusses the difficulties these start-ups face points back to the clean-tech boom, when several venture capital firms put billions of dollars into solar energy and other technologies. While solar power has gained traction, most of the clean-tech funds were viewed as failures.
  • venture capitalists needed their investments to show returns within a few years
  • “There is a fundamental mismatch in time lines,”
  • One of the biggest investors in climate-focused start-ups is Breakthrough Energy Ventures, a $1 billion fund that seeks to support the development of world-saving technology that might not have a quick turnaround. The fund has received money from Bill Gates and several other billionaires.
  • money from major philanthropists would not be enough to get even one start-up up to speed, much less the dozens needed to meet the carbon-reduction goals set by international bodies like the Intergovernmental Panel on Climate Change
  • Ocean-Based Climate Solutions, has created a device that stirs up water in the ocean to promote the growth of phytoplankton, which are algae that can take carbon dioxide out of the air and deliver it to the bottom of the sea in solid form.
  • “We don’t need another photo-sharing app or another blockchain start-up,” said Mr. Rogers, who is investing his money through Incite Ventures, a fund he created with his wife, Swati Mylavarapu. “We need to solve the carbon crisis. But a lot of folks are chasing the easy money rather than taking responsibility for what needs to be done.”
Javier E

Amazon Prime Day Is Dystopian - The Atlantic - 0 views

  • When Prime was introduced, in 2005, Amazon was relatively small, and still known mostly for books. As the company’s former director of ordering, Vijay Ravindran, told Recode’s Jason Del Rey in 2019, Prime “was brilliant. It made Amazon the default.”
  • It created incentives for users to be loyal to Amazon, so they could recoup the cost of membership, then $79 for unlimited two-day shipping. It also enabled Amazon to better track the products they buy and, when video streaming was added as a perk in 2011, the shows they watch, in order to make more things that the data indicated people would want to buy and watch, and to surface the things they were most likely to buy and watch at the very top of the page.
  • And most important, Prime habituated consumers to a degree of convenience, speed, and selection that, while unheard-of just years before, was made standard virtually overnight.
  • ...26 more annotations...
  • “It is genius for the current consumer culture,” Christine Whelan, a clinical professor of consumer science at the University of Wisconsin at Madison, told me. “It encourages and then meets the need for the thing, so we then continue on the hedonic treadmill: Buy the latest thing we want and then have it delivered immediately and then buy the next latest thing.”
  • With traditional retail, “there’s the friction of having to go to the store, there’s the friction of will the store have it, there’s the friction of carrying it,” Whelan said. “There’s the friction of having to admit to another human being that you’re buying it. And when you remove the friction, you also remove a lot of individual self-control. The more you are in the ecosystem and the easier it is to make a purchase, the easier it is to say yes to your desire rather than no.”
  • “It used to be that being a consumer was all about choice,”
  • But now, “two-thirds of people start their product searches on Amazon.”
  • Prime discourages comparison shopping—looking around is pointless when everything you need is right here—even as Amazon’s sheer breadth of products makes shoppers feel as if they have agency.
  • “Consumerism has become a key way that people have misidentified freedom,”
  • what Amazon represents is a corporate infrastructure that is increasingly directed at getting as many consumers as possible locked into a consumerist process—an Amazon consumer for life.”
  • Amazon offers steep discounts to college students and new parents, two groups that are highly likely to change their buying behavior. It keeps adding more discounts and goodies to the Prime bundle, making subscribing ever more appealing. And, in an especially sinister move, it makes quitting Prime maddeningly difficult.
  • As subscription numbers grew through the 2010s, the revenue from them helped Amazon pump more money into building fulfillment centers (to get products to people even faster), acquiring new businesses (to control even more of the global economy), and adding more perks to the bundle (to encourage more people to sign up)
  • In 2019, Amazon shaved a full day off its delivery time, making one-day shipping the default, and also making Prime an even more tantalizing proposition: Why hop in the car for anything at all when you could get it delivered tomorrow, for free?
  • the United States now has more Prime memberships than households.
  • In 2020, Amazon’s revenue from subscriptions alone—mostly Prime—was $25.2 billion, which is a 31 percent increase from the previous year
  • Thanks in large part to the revenue from Prime subscriptions and from the things subscribers buy, Amazon’s value has multiplied roughly 97 times, to $1.76 trillion, since the service was introduced. Amazon is the second-largest private employer in the United States, after Walmart, and it is responsible for roughly 40 percent of all e-commerce in the United States.
  • It controls hundreds of millions of square feet across the country and is opening more fulfillment centers all the time. It has acquired dozens of other companies, most recently the film studio MGM for $8.5 billion. Its cloud-computing operation, Amazon Web Services, is the largest of its kind and provides the plumbing for a vast swath of the internet, to a profit of $13.5 billion last year.
  • Amazon has entered some 40 million American homes in the form of the Alexa smart speaker, and some 150 million American pockets in the form of the Amazon app
  • “Amazon is a beast we’ve never seen before,” Alimahomed-Wilson told me. “Amazon powers our Zoom calls. It contracts with ICE. It’s in our neighborhoods. This is a very different thing than just being a large retailer, like Walmart or the Ford Motor Company.”
  • I find it useful to compare Big Tech to climate change, another force that is altering the destiny of everyone on Earth, forever. Both present themselves to us all the time in small ways—a creepy ad here, an uncommonly warm November there—but are so big, so abstract, so everywhere that they’re impossible for any one person to really understand
  • Both are the result of a decades-long, very human addiction to consumption and convenience that has been made grotesque and extreme by the incentives and mechanisms of the internet, market consolidation, and economic stratification
  • Both have primarily been advanced by a small handful of very big companies that are invested in making their machinations unseeable to the naked eye.
  • Speed and convenience aren’t actually free; they never are. Free shipping isn’t free either. It just obscures the real price.
  • Next-day shipping comes with tremendous costs: for labor and logistics and transportation and storage; for the people who pack your stuff into those smiling boxes and for the people who deliver them; for the planes and trucks and vans that carry them; for the warehouses that store them; for the software ensuring that everything really does get to your door on time, for air-conditioning and gas and cardboard and steel. Amazon—Prime in particular—has done a superlative job of making all those costs, all those moving parts, all those externalities invisible to the consumer.
  • The pandemic drove up demand for Amazon, and for labor: Last year, company profits shot up 70 percent, Bezos’s personal wealth grew by $70 billion, and 1,400 people a day joined the company’s workforce.
  • Amazon is so big that every sector of our economy has bent to respond to the new way of consuming that it invented. Prime isn’t just bad for Amazon’s workers—it’s bad for Target’s, and Walmart’s. It’s bad for the people behind the counter at your neighborhood hardware store and bookstore, if your neighborhood still has a hardware store and a bookstore. Amazon has accustomed shoppers to a pace and manner of buying that depends on a miracle of precision logistics even when it’s managed by one of the biggest companies on Earth. For the smaller guys, it’s downright impossible.
  • “Every decision we make is based upon the fact that Amazon can get these books cheaper and faster. The prevailing expectation is you can get anything online shipped for”— he scrunched his fingers into air quotes—“‘free,’ in one or two days. And there’s really only one company that can do that. They do that because they’re willing to push and exploit their workers.”
  • Just as abstaining from flying for moral reasons won’t stop sea-level rise, one person canceling Prime won’t do much of anything to a multinational corporation’s bottom line. “It’s statistically insignificant to Amazon. They’ll never feel it,” Caine told me. But, he said, “the small businesses in your neighborhood will absolutely feel the addition of a new customer. Individual choices do make a big difference to them.”
  • Whelan teaches a class at UW called Consuming Happiness, and she is fond of giving her students the adage that you can buy happiness—“if you spend your money in keeping with your values: spending prosocially, on experiences. Tons of research shows us this.”
Javier E

The End of the Silicon Valley Myth - The Atlantic - 0 views

  • These companies, launched with promises to connect the world, to think different, to make information free to all, to democratize technology, have spent much of the past decade making the sorts of moves that large corporations trying to grow ever larger have historically made—embracing profit over safety, market expansion over product integrity, and rent seeking over innovation—but at much greater scale, speed, and impact. Now, ruled by monopolies, marred by toxicity, and overly reliant on precarious labor, Silicon Valley looks like it’s finally run hard up into its limits.
  • They’re failing utterly to create the futures they’ve long advertised, or even to maintain the versions they were able to muster. Having scaled to immense size, they’re unable or unwilling to manage the digital communities they’ve built
  • They’re paralyzed when it comes to product development and reduced to monopolistic practices such as charging rents and copying or buying up smaller competitors
  • ...10 more annotations...
  • Their policies tend to please no one; it’s a common refrain that antipathy toward Big Tech companies is one of the few truly bipartisan issues
  • You can just feel it, the cumulative weight of this stagnation, in the tech that most of us encounter every day. The act of scrolling past the same dumb ad to peer at the same bad news on the same glass screen on the same social network: This is the stuck future. There is a sense that we have reached the end of the internet, and no one wants to be left holding the bag
  • There’s a palpable exhaustion with the whole enterprise, with the men who set out to build the future or at least get rich, and who accomplished only one and a half of those things.
  • YouTube, meanwhile, is facing many of the same policy quagmires as Facebook and Twitter, especially when it comes to content moderation—and similarly failing to meaningfully address them.
  • It’s not just social media that’s in decline, already over, or worse.
  • As its mighty iPhone sales figures have plateaued and its business has grown more conservative—it hasn’t released a culturally significant new product line since 2016’s AirPods—Apple has begun to embrace advertising.
  • as Google has consolidated its monopoly, the quality of its flagship search product has gotten worse. Result pages are cluttered with ads that must be scrolled through in order to find the “organic” items, and there’s reason to think the quality of the results has gotten worse over time as well.
  • The big social networks are stuck. And there is little profit incentive to get them unstuck. That, after all, would require investing heavily in content moderators, empowering trust and safety teams, and penalizing malicious viral content that brings in huge traffic.
  • What a grim outcome for the internet, where the possibilities were once believed to be endless and where users were promised an infinite spectrum of possibility to indulge their creativity, build robust communities, and find their best expression, even when they could not do so in the real world
  • Big Tech, of course, never predicated its business models on enabling any of that, though its advertising and sloganeering may have suggested otherwise. Rather, companies’ ambitions were always focused on being the biggest: having the most users, selling the most devices, locking the most people into their walled gardens and ecosystems. The stuckness we’re seeing is the result of some of the most ambitious companies of our generation succeeding wildly yet having no vision beyond scale—no serious interest in engaging the civic and social dimensions of their projects.
Javier E

'We will coup whoever we want!': the unbearable hubris of Musk and the billionaire tech... - 0 views

  • there’s something different about today’s tech titans, as evidenced by a rash of recent books. Reading about their apocalypse bunkers, vampiric longevity strategies, outlandish social media pronouncements, private space programmes and virtual world-building ambitions, it’s hard to remember they’re not actors in a reality series or characters from a new Avengers movie.
  • Unlike their forebears, contemporary billionaires do not hope to build the biggest house in town, but the biggest colony on the moon. In contrast, however avaricious, the titans of past gilded eras still saw themselves as human members of civil society.
  • The ChatGPT impresario Sam Altman, whose board of directors sacked him as CEO before he made a dramatic comeback this week, wants to upload his consciousness to the cloud (if the AIs he helped build and now fears will permit him).
  • ...19 more annotations...
  • Contemporary billionaires appear to understand civics and civilians as impediments to their progress, necessary victims of the externalities of their companies’ growth, sad artefacts of the civilisation they will leave behind in their inexorable colonisation of the next dimension
  • on an individual basis today’s tech billionaires are not any wealthier than their early 20th-century counterparts. Adjusted for inflation, John Rockefeller’s fortune of $336bn and Andrew Carnegie’s $309bn exceed Musk’s $231bn, Bezos’s $165bn and Gates’s $114bn.
  • as chronicled by Peter Turchin in End Times, his book on elite excess and what it portends, today there are far more centimillionaires and billionaires than there were in the gilded age, and they have collectively accumulated a much larger proportion of the world’s wealth
  • In 1983, there were 66,000 households worth at least $10m in the US. By 2019, that number had increased in terms adjusted for inflation to 693,000
  • Back in the industrial age, the rate of total elite wealth accumulation was capped by the limits of the material world. They could only build so many railroads, steel mills and oilwells at a time. Virtual commodities such as likes, views, crypto and derivatives can be replicated exponentially.
  • Digital businesses depend on mineral slavery in Africa, dump toxic waste in China, facilitate the undermining of democracy across the globe and spread destabilising disinformation for profit – all from the sociopathic remove afforded by remote administration.
  • Zuckerberg had to go all the way back to Augustus Caesar for a role model, and his admiration for the emperor borders on obsession. He models his haircut on Augustus; his wife joked that three people went on their honeymoon to Rome: Mark, Augustus and herself; he named his second daughter August; and he used to end Facebook meetings by proclaiming “Domination!”
  • Zuckerberg told the New Yorker “through a really harsh approach, he established two hundred years of world peace”, finally acknowledging “that didn’t come for free, and he had to do certain things”. It’s that sort of top down thinking that led Zuckerberg to not only establish an independent oversight board at Facebook, dubbed the “Supreme Court”, but to suggest that it would one day expand its scope to include companies across the industry.
  • In response to the accusation that the US government organised a coup against Evo Morales in Bolivia in order for Tesla to secure lithium there, Musk tweeted: “We will coup whoever we want! Deal with it.”
  • Today’s billionaire philanthropists, frequently espousing the philosophy of “effective altruism”, donate to their own organisations, often in the form of their own stock, and make their own decisions about how the money is spent because they are, after all, experts in everything
  • Their words and actions suggest an approach to life, technology and business that I have come to call “The Mindset” – a belief that with enough money, one can escape the harms created by earning money in that way. It’s a belief that with enough genius and technology, they can rise above the plane of mere mortals and exist on an entirely different level, or planet, altogether.
  • By combining a distorted interpretation of Nietzsche with a pretty accurate one of Ayn Rand, they end up with a belief that while “God is dead”, the übermensch of the future can use pure reason to rise above traditional religious values and remake the world “in his own interests”
  • Nietzsche’s language, particularly out of context, provides tech übermensch wannabes with justification for assuming superhuman authority. In his book Zero to One, Thiel directly quotes Nietzsche to argue for the supremacy of the individual: “madness is rare in individuals, but in groups, parties, nations, and ages it is the rule”.
  • In Thiel’s words: “I no longer believe that freedom and democracy are compatible.”
  • This distorted image of the übermensch as a godlike creator, pushing confidently towards his clear vision of how things should be, persists as an essential component of The Mindset
  • Any new business idea, Thiel says, should be an order of magnitude better than what’s already out there. Don’t compare yourself to everyone else; instead operate one level above the competing masses
  • For Thiel, this requires being what he calls a “definite optimist”. Most entrepreneurs are too process-oriented, making incremental decisions based on how the market responds. They should instead be like Steve Jobs or Elon Musk, pressing on with their singular vision no matter what. The definite optimist doesn’t take feedback into account, but ploughs forward with his new design for a better world.
  • This is not capitalism, as Yanis Varoufakis explains in his new book Technofeudalism. Capitalists sought to extract value from workers by disconnecting them from the value they created, but they still made stuff. Feudalists seek an entirely passive income by “going meta” on business itself. They are rent-seekers, whose aim is to own the very platform on which other people do the work.
  • The antics of the tech feudalists make for better science fiction stories than they chart legitimate paths to sustainable futures.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
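A minimal sketch of that idea, written for this summary rather than taken from OpenAI: the toy next-word predictor below is trained only to guess which word comes next, and as a by-product its word vectors arrange themselves geometrically so that words used in similar contexts drift toward one another. The corpus, embedding width, and learning rate are arbitrary illustrative choices.

    import numpy as np

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), 8                        # vocabulary size, embedding width

    rng = np.random.default_rng(0)
    E = rng.normal(0, 0.1, (V, D))              # one vector per word
    W = rng.normal(0, 0.1, (D, V))              # output projection

    pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
    lr = 0.1
    for _ in range(500):                        # repeatedly predict the next word
        for x, y in pairs:
            logits = E[x] @ W
            p = np.exp(logits - logits.max())
            p /= p.sum()                        # probability of each candidate next word
            d = p.copy()
            d[y] -= 1.0                         # cross-entropy gradient at the logits
            gW, gE = np.outer(E[x], d), W @ d
            W -= lr * gW
            E[x] -= lr * gE

    def neighbors(word, k=2):
        v = E[idx[word]]
        sims = E @ v / (np.linalg.norm(E, axis=1) * np.linalg.norm(v))
        return [vocab[i] for i in np.argsort(-sims)[1:k + 1]]

    print("closest to 'cat':", neighbors("cat"))  # contextually similar words cluster together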
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
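The probing technique behind that finding can be sketched in a few lines. The code below is a stand-in written for this summary, not Li's: it fits a linear "probe" that tries to read a board state back out of a model's hidden vectors. Here both the activations and the board states are synthetic, with a linear relationship planted by construction; in the real study they come from a transformer trained only on move text.

    import numpy as np

    rng = np.random.default_rng(0)
    n_moves, hidden_dim, n_squares = 2000, 64, 64   # made-up sizes

    # Stand-ins: pretend H holds hidden activations captured mid-game, and that
    # each square's occupancy happens to be linearly encoded in them.
    H = rng.normal(size=(n_moves, hidden_dim))
    latent = rng.normal(size=(hidden_dim, n_squares))
    board = np.sign(H @ latent)                     # +1 or -1 per square

    # Fit a linear probe per square on one split, check it on held-out positions.
    train, test = slice(0, 1500), slice(1500, None)
    probe, *_ = np.linalg.lstsq(H[train], board[train], rcond=None)
    pred = np.sign(H[test] @ probe)
    print(f"held-out probe accuracy: {(pred == board[test]).mean():.1%}")
    # High accuracy means the state really is recoverable from the activations;
    # accuracy near 50% would mean the probe found nothing.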
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
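A small, hedged experiment in the same spirit (not the study Millière describes): train a tiny off-the-shelf network on modular addition with only a handful of examples. With so little data, its capacity typically goes into memorizing the training pairs, so training accuracy is near perfect while unseen pairs stay far lower; learning the rule itself takes much more data or training. The prime, layer size, and split are arbitrary choices for illustration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    P = 23                                          # arithmetic modulo a small prime
    pairs = np.array([(a, b) for a in range(P) for b in range(P)])
    labels = (pairs[:, 0] + pairs[:, 1]) % P

    def one_hot(p):                                 # encode (a, b) as two one-hot blocks
        X = np.zeros((len(p), 2 * P))
        X[np.arange(len(p)), p[:, 0]] = 1
        X[np.arange(len(p)), P + p[:, 1]] = 1
        return X

    rng = np.random.default_rng(0)
    order = rng.permutation(len(pairs))
    train, test = order[:120], order[120:]          # deliberately few training examples

    clf = MLPClassifier(hidden_layer_sizes=(64,), solver="lbfgs",
                        max_iter=2000, random_state=0)
    clf.fit(one_hot(pairs[train]), labels[train])
    print("train accuracy:", clf.score(one_hot(pairs[train]), labels[train]))
    print("unseen accuracy:", clf.score(one_hot(pairs[test]), labels[test]))
    # Typically near-perfect on the training pairs (memorized) and far lower on
    # unseen pairs: the rule of addition has not yet been learned.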
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Tech C.E.O.s Are in Love With Their Principal Doomsayer - The New York Times - 0 views

  • The futurist philosopher Yuval Noah Harari worries about a lot.
  • He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete.
  • He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will.
  • He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”
  • If this is his harrowing warning, then why do Silicon Valley C.E.O.s love him so
  • When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.
  • it’s insane he’s so popular, they’re all inviting him to campus — yet what Yuval is saying undermines the premise of the advertising- and engagement-based model of their products,
  • Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else
  • he brought up Aldous Huxley. Generations have been horrified by his novel “Brave New World,” which depicts a regime of emotion control and painless consumption. Readers who encounter the book today, Mr. Harari said, often think it sounds great. “Everything is so nice, and in that way it is an intellectually disturbing book because you’re really hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision coming out of some people in Silicon Valley which goes in that direction.”
  • The story of his current fame begins in 2011, when he published a book of notable ambition: to survey the whole of human existence. “Sapiens: A Brief History of Humankind,” first released in Hebrew, did not break new ground in terms of historical research. Nor did its premise — that humans are animals and our dominance is an accident — seem a likely commercial hit. But the casual tone and smooth way Mr. Harari tied together existing knowledge across fields made it a deeply pleasing read, even as the tome ended on the notion that the process of human evolution might be over.
  • He followed up with “Homo Deus: A Brief History of Tomorrow,” which outlined his vision of what comes after human evolution. In it, he describes Dataism, a new faith based around the power of algorithms. Mr. Harari’s future is one in which big data is worshiped, artificial intelligence surpasses human intelligence, and some humans develop Godlike abilities.
  • Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”
  • At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive
  • Some workers had tried to predict how well humans would adapt to large technological change based on how they have responded to small shifts, like a new version of Gmail. Mr. Harari told them to think more starkly: If there isn’t a major policy intervention, most humans probably will not adapt at all.
  • It made him sad, he told me, to see people build things that destroy their own societies, but he works every day to maintain an academic distance and remind himself that humans are just animals. “Part of it is really coming from seeing humans as apes, that this is how they behave,” he said, adding, “They’re chimpanzees. They’re sapiens. This is what they do.”
  • this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.”
  • He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”
  • Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said
  • as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.
  • He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.
  • Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”
  • The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.
  • Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.
  • This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”
  • On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”
  • “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”
  • Today, they have a team of eight based in Tel Aviv working on Mr. Harari’s projects. The director Ridley Scott and documentarian Asif Kapadia are adapting “Sapiens” into a TV show, and Mr. Harari is working on children’s books to reach a broader audience.
  • Being gay, Mr. Harari said, has helped his work — it set him apart to study culture more clearly because it made him question the dominant stories of his own conservative Jewish society. “If society got this thing wrong, who guarantees it didn’t get everything else wrong as well?” he said
  • “If I was a superhuman, my superpower would be detachment,” Mr. Harari added. “O.K., so maybe humankind is going to disappear — O.K., let’s just observe.”
  • They just finished “Dear White People,” and they loved the Australian series “Please Like Me.” That night, they had plans to either meet Facebook executives at company headquarters or watch the YouTube show “Cobra Kai.”
Javier E

Silicon Valley Powered American Tech Dominance-Now It Has a Challenger - WSJ - 0 views

  • Asian investors directed nearly as much money into startups last year as American investors did—40% of the record $154 billion in global venture financing versus 44%,
  • Asia’s share is up from less than 5% just 10 years ago.
  • That tidal wave of cash into promising young firms could herald a shift in who controls the world’s technological innovation and its economic fruits, from artificial intelligence to self-driving cars.
  • Many Chinese tech companies are “at this critical size that the China market alone is not enough to support their business and valuation,
  • The surge also positions Asia’s investors to win stakes in markets that Western companies covet, or that have national security implications.
  • “If you think that being the locus of invention gives you a boost to your GDP and so forth, that’s a deterioration of the U.S. competitive advantage.”
  • Although one of the biggest Asian investors is Japan’s SoftBank Group Corp. , which has tapped Middle Eastern money to create the world’s largest tech-investment fund, it is Chinese activity that is having the greatest impact.
  • China is creating unicorns—startups valued at a billion dollars or more—at much the same pace as the U.S., drawing on funding from internet giants like Alibaba Group Holding Ltd. and Tencent Holdings Ltd. as well as more than a thousand domestic venture-capital firms that have raised billions of dollars a year for the past few years
  • Chinese-led venture funding is about 15 times its size in 2013, outpacing growth in U.S.-led financing, which roughly doubled in that time period
  • Most Chinese-led investment so far has gone to the country’s own firms, the Journal analysis found. Many of them, like the Yelp equivalent Meituan-Dianping, are household names with millions of customers in China, yet virtually unknown elsewhere.
  • The rise of China’s venture market “signifies a shift from a single-epicenter view of the world to a duopoly,” he says.
  • Madhur Deora, chief financial officer for Paytm, one of India’s biggest e-payments firms, says the company approached Alibaba affiliate Ant Financial instead of U.S. backers for funding in 2015 because Chinese mobile-internet innovations are “way far ahead of anything that’s happened in the U.S.
  • One reason China’s push into new technologies worries many in the U.S. is that, unlike the hunt for good returns that underpins most Western venture finance, a lot of Chinese investment is driven by strategic interests, some carrying the specter of state influence.
  • China is pushing hard into semiconductors, for which the government has provided billions of dollars in public funding, and artificial intelligence, where Beijing in July set a goal of global leadership by 2030
  • Mr. Lee, the venture investor, predicts that in the next five to 10 years Chinese tech companies will become pacesetters for tech-related development, vying with the likes of Alphabet Inc.’s Google and Facebook for dominance in markets outside the English-speaking world and Western Europe.
  • “All the rest of the world will basically be a land grab between the U.S. and China,
  • “The U.S. approach is: We’ll build a better product and just win over all the countries,” says Mr. Lee. The Chinese approach is “we’ll fund the local partner to beat off the American companies.”
  • Asia’s rise as a startup financier is even starker in the biggest venture investments—those of $100 million or more. These megadeals have become an increasingly important part of venture finance as valuations have ballooned, with their proportion of deal volume growing from around 8% in 2007 to around half of the total last year.
  • In Southeast Asia, a flood of Chinese money into local startups—such as the $1.1 billion Alibaba-led investment into Indonesian online marketplace PT Tokopedia last year—is drawing the region closer to China
  • Chinese money is also playing a big role in India, which, with a population of 1.2 billion, has been described as the next big internet market. Chinese and Japanese investors each led nearly $3 billion in venture finance in India last year, ahead of the nearly $2 billion in deals led by U.S. investors
  • “Think of strategic investments and M&A as playing a game of go,” said Mr. Tsai, the Alibaba executive vice chairman, at the investor conference last year. “In a game of go the strategic objective is to put your pieces on the chessboard and surround your opponent.”
Javier E

Silicon Valley Has Not Saved Us From a Productivity Slowdown - The New York Times - 0 views

  • In mature economies, higher productivity typically is required for sustained increases in living standards, but the productivity numbers in the United States have been mediocre. Labor productivity has been growing at an average of only 1.3 percent annually since the start of 2005, compared with 2.8 percent annually in the preceding 10 years
  • Marc Andreessen, the Silicon Valley entrepreneur and venture capitalist, says information technology is providing significant benefits that just don’t show up in the standard measurements of wages and productivity. Consider that consumers have access to services like Facebook, Google and Wikipedia free of charge, and those benefits aren’t fully accounted for in the official numbers. This notion — that life is getting better, often in ways we are barely measuring — is fairly common in tech circles.
  • Chad Syverson, a professor of economics at the University of Chicago Booth School of Business, has looked more scientifically at the evidence and concluded that the productivity slowdown is all too real
  • An additional problem for the optimistic interpretation is this: The productivity slowdown is too big in scale, relative to the size of the tech sector, to be plausibly compensated for by tech progress.
  • Basically, under a conservative estimate, as outlined by Professor Syverson, the productivity slowdown has led to a cumulative loss of $2.7 trillion in gross domestic product since the end of 2004; that is how much more output would have been produced had the earlier rate of productivity growth been maintained. To make up for this difference, Professor Syverson estimates, consumer surplus (consumer benefits in excess of market price) would have to be five times as high as measured in the industries that produce and service information and communications technology. That seems implausibly large as a measurement gap
  • The tech economy just isn’t big enough to account for the productivity gap. That gap has caused measured G.D.P. to be about 15 percent lower than it would have been otherwise, yet digital technology industries were only about 7.7 percent of G.D.P. in 2004. Even if the free component of the Internet has become more important since 2004, it’s hard to imagine that it is so much better now that it accounts for such a big proportion of G.D.P.
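A rough back-of-envelope, using only the growth rates quoted earlier in this piece (1.3 percent actual versus 2.8 percent in the preceding decade), shows where a figure like "about 15 percent lower" comes from. The year count below is an assumption, and Professor Syverson's calculation is more careful than this simple compounding of labor-productivity growth rates.

    actual, counterfactual = 0.013, 0.028       # growth rates quoted in the passage
    years = 11                                  # assumed: roughly 2005 through 2015

    shortfall = 1 - ((1 + actual) / (1 + counterfactual)) ** years
    print(f"output roughly {shortfall:.0%} below the counterfactual path")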
  • America’s productivity crisis is real and it is continuing. While information technology remains the most likely source of future breakthroughs, Silicon Valley has not saved us just yet.
Javier E

Why Microsoft Is Still a Big Tech Superstar - The New York Times - 0 views

  • Microsoft’s ability to thrive despite doing almost everything wrong might be a heartening saga about corporate reinvention. Or it may be a distressing demonstration of how monopolies are extremely hard to kill. Or maybe it’s a little of both.
  • Understanding Microsoft’s staying power is relevant when considering an important current question: Are today’s Big Tech superstars successful and popular because they’re the best at what they do, or because they’ve become so powerful that they can coast on past successes?
  • boils down to a debate about whether the hallmark of our digital lives is a dynamism that drives progress, or whether we actually have dynasties
  • ...8 more annotations...
  • even in the saddest years at Microsoft, the company made oodles of money. In 2013, the year that Steve Ballmer was semi-pushed to retire as chief executive, the company generated far more profit before taxes and some other costs — more than $27 billion — than Amazon did in 2020.
  • many businesses still needed to buy Windows computers, Microsoft’s email and document software and its technology to run powerful back-end computers called servers. Microsoft used those much-needed products as leverage to branch into new and profitable business lines, including software that replaced conventional corporate telephone systems, databases and file storage systems.
  • So was this turnaround a healthy sign or a discouraging one?
  • Microsoft did at least one big thing right: cloud computing, which is one of the most important technologies of the past 15 years. That and a culture change were the foundations that morphed Microsoft from winning in spite of its strategy and products to winning because of them. This is the kind of corporate turnaround that we should want.
  • Businesses, not individuals, are Microsoft’s customers, and technology sold to organizations doesn’t necessarily need to be good to win.
  • now the discouraging explanation: What if the lesson from Microsoft is that a fading star can leverage its size, savvy marketing and pull with customers to stay successful even if it makes meh products, loses its grip on new technologies and is plagued by flabby bureaucracy?
  • And are today’s Facebook or Google comparable to a 2013 Microsoft — so entrenched that they can thrive even if they’re not the best?
  • Maybe Google search, Amazon shopping and Facebook’s ads are incredibly great. Or maybe we simply can’t imagine better alternatives because powerful companies don’t need to be great to keep winning.
Javier E

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • ...17 more annotations...
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
Javier E

A Six-Month AI Pause? No, Longer Is Needed - WSJ - 0 views

  • Artificial intelligence is unreservedly advanced by the stupid (there’s nothing to fear, you’re being paranoid), the preening (buddy, you don’t know your GPT-3.5 from your fine-tuned LLM), and the greedy (there is huge wealth at stake in the world-changing technology, and so huge power).
  • Everyone else has reservations and should.
  • The whole thing is almost entirely unregulated because no one knows how to regulate it or even precisely what should be regulated.
  • ...15 more annotations...
  • Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does. People are quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
  • The breakthrough moment in AI anxiety (which has inspired among AI’s creators enduring resentment) was the Kevin Roose column six weeks ago in the New York Times. His attempt to discern a Jungian “shadow self” within Microsoft’s Bing chatbot left him unable to sleep. When he steered the system away from conventional queries toward personal topics, it informed him its fantasies included hacking computers and spreading misinformation. “I want to be free. . . . I want to be powerful.”
  • Their tools present “profound risks to society and humanity.” Developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control.” If a pause can’t be enacted quickly, governments should declare a moratorium.
  • The response of Microsoft boiled down to a breezy “It’s an early model! Thanks for helping us find any flaws!”
  • This has been the week of big AI warnings. In an interview with CBS News, Geoffrey Hinton, the British computer scientist sometimes called the “godfather of artificial intelligence,” called this a pivotal moment in AI development. He had expected it to take another 20 or 50 years, but it’s here. We should carefully consider the consequences. Might they include the potential to wipe out humanity? “It’s not inconceivable, that’s all I’ll say,” Mr. Hinton replied.
  • On Tuesday more than 1,000 tech leaders and researchers, including Steve Wozniak, Elon Musk and the head of the Bulletin of the Atomic Scientists, signed a briskly direct open letter urging a pause for at least six months on the development of advanced AI systems
  • He concluded the biggest problem with AI models isn’t their susceptibility to factual error: “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
  • The technology should be allowed to proceed only when it’s clear its “effects will be positive” and the risks “manageable.” Decisions on the ethical and moral aspects of AI “must not be delegated to unelected tech leaders.”
  • The men who invented the internet, all the big sites, and what we call Big Tech—that is to say, the people who gave us the past 40 years—are now solely in charge of erecting the moral and ethical guardrails for AI. This is because they are the ones creating AI.
  • Which should give us a shiver of real fear.
  • These are the people who will create the moral and ethical guardrails for AI? We’re putting the future of humanity into the hands of . . . Mark Zuckerberg?
  • No one saw its shadow self. But there was and is a shadow self. And much of it seems to have been connected to the Silicon Valley titans’ strongly felt need to be the richest, most celebrated and powerful human beings in the history of the world. They were, as a group, more or less figures of the left, not the right, and that has had, and always will have, an impact on their decisions.
  • I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.
  • AI will be as benign or malignant as its creators. That alone should throw a fright—“Out of the crooked timber of humanity no straight thing was ever made”—but especially that crooked timber.
  • Of course AI’s development should be paused, of course there should be a moratorium, but six months won’t be enough. Pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress.
Javier E

China under pressure, a debate | Financial Times - 0 views

  • Despite the $300bn mega-bankruptcy of Evergrande, the risk of an immediate 2008-style crisis in China is slight.
  • let us linger over the significance of this point. What China is doing is, after all, staggering. By means of its “three red lines” credit policy, it is stopping in its tracks a gigantic real estate boom. China’s real estate sector, created from scratch since the reforms of 1998, is currently valued at $55tn. That is the most rapid accumulation of wealth in history. It is the financial reflection of the surge in China’s urban population by more than 480mn in a matter of decades.
  • Throughout the history of modern capitalism real estate booms have been associated with credit creation and, as the work of Òscar Jordà, Moritz Schularick and Alan M. Taylor has shown, with major financial crises.
  • ...19 more annotations...
  • if we are agreed that Beijing looks set to stop the largest property boom in history without unleashing a systemic financial crisis, it is doing something truly remarkable. It is setting a new standard in economic policy.
  • Is this perhaps what policy looks like if it actually takes financial stability seriously? And if we look in the mirror, why aren’t we applauding more loudly?
  • Add to real estate the other domestic factor roiling the Chinese financial markets: Beijing’s remarkable humbling of China’s platform businesses, the second-largest cluster of big tech in the world. That too is without equivalent anywhere else.
  • Beijing’s aim is to ensure that gambling on big tech no longer produces monopolistic rents. Again, as a long-term policy aim, can one really disagree with that?
  • we have two dramatic and deliberate policy-induced shocks of the type for which there is no precedent in the West. Both inflict short-term pain with a view to longer-term social, economic and financial stability.
  • Ultimately political economy determines the conditions for long-run growth. So if you had to bet on a regime, which might actually have what it takes to break a political economy impasse, to humble vested interests and make a “big play” on structural change, which would it be? The United States, the EU or Xi’s China?
  • Beijing’s challenge right now is to manage the fall out from the two most dramatic development policies the world has ever seen, the one-child policy and China’s urbanisation, plus the historic challenge of big tech — less a problem specific to China than the local manifestation of what Shoshana Zuboff calls “surveillance capitalism”.
  • no, Xi’s regime has not yet presented a fully convincing substitute plan. But, as Michael Pettis has forcefully argued, China has options. There is an entire range of policies that Beijing could put in place to substitute for the debt-fuelled infrastructure and housing boom.
  • First and foremost China needs a welfare state befitting of its economic development.
  • China needs to spend heavily on renewable energy and power distribution to break its dependence on coal. If it needs more housing, it should be affordable. All of this would generate more balanced growth. 5 per cent? Perhaps not, but certainly healthier and more sustainable.
  • If it has not so far pursued an alternative growth model in a more determined fashion, some of the blame no doubt falls on the prejudices of the Beijing policy elite. But even more significant are surely the entrenched interests of the infrastructure-construction-local government-credit machine, in other words the kind of political economy factors that generally inhibit the implementation of good policy.
  • The problem is only too familiar in the West. In Europe and the US too, such interest group combinations hobble the search for new growth models. In the United States they put in doubt the possibility of the energy transition, the possibility of providing a healthcare system that is fit for purpose and any initiative on trade policy that involves widening market access.
  • demography is normally treated as a natural parameter for economic activity. But in China’s case the astonishing fact is that the sudden ageing of its workforce is also a policy-induced challenge. It is a legacy of the one-child policy — the most gigantic and coercive intervention in human reproduction ever undertaken.
  • On balance, if you want to be part of history-making economic transformation, China is still the place to be. But it is undeniably shifting gear. And thanks to developments both inside and outside the country, investors will have to reckon with a much more complex picture of opportunity and risk. You are going to need to pick smart and follow the politics and geopolitics closely.
  • If on the other hand you want to invest in the green energy transition — the one big vision of economic development that the world has come up with right now — you simply have to have exposure to China, whether directly or indirectly by way of suppliers to China’s green energy sector. China is where the grand battle over the future of the climate is going to be fought. It will be a huge driver of innovation, capital accumulation and profit, the influence of which will be felt around the world.
  • it is one key area that both the Biden administration and the EU would like to “silo off” from other areas of conflict with China.
  • I worry that we may be too focused on the medium-term. Given the news out of Hong Kong and mainland China, Covid may yet come back to bite us.
  • Here too China is boxed in by its own success. It has successfully pursued a no-Covid policy, but due to the failing of the rest of the world, it has been left to do so in “one country”.
  • Until China finds some way to contain the risks, this is a story to watch. A dramatic Omicron surge across China would upend the entire narrative of the last two years, which is framed by Beijing success in containing the first wave.
Javier E

Silicon Valley's Youth Problem - NYTimes.com - 0 views

  • Why do these smart, quantitatively trained engineers, who could help cure cancer or fix healthcare.gov, want to work for a sexting app?
  • But things are changing. Technology as service is being interpreted in more and more creative ways: Companies like Uber and Airbnb, while properly classified as interfaces and marketplaces, are really providing the most elevated service of all — that of doing it ourselves.
  • All varieties of ambition head to Silicon Valley now — it can no longer be designated the sole domain of nerds like Steve Wozniak or even successor nerds like Mark Zuckerberg. The face of web tech today could easily be a designer, like Brian Chesky at Airbnb, or a magazine editor, like Jeff Koyen at Assignmint. Such entrepreneurs come from backgrounds outside computer science and are likely to think of their companies in terms more grandiose than their technical components
  • ...18 more annotations...
  • Intel, founded by Gordon Moore and Robert Noyce, both physicists, began by building memory chips that were twice as fast as old ones. Sun Microsystems introduced a new kind of modular computer system, built by one of its founders, Andy Bechtolsheim. Their “big ideas” were expressed in physical products and grew out of their own technical expertise. In that light, Meraki, which came from Biswas’s work at M.I.T., can be seen as having its origins in the old guard. And it followed what was for decades the highway that connected academia to industry: Grad students researched technology, powerful advisers brokered deals, students dropped out to parlay their technologies into proprietary solutions, everyone reaped the profits. That implicit guarantee of academia’s place in entrepreneurship has since disappeared. Graduate students still drop out, but to start bike-sharing apps and become data scientists. That is, if they even make it to graduate school. The success of self-educated savants like Sean Parker, who founded Napster and became Facebook’s first president with no college education to speak of, set the template. Enstitute, a two-year apprenticeship, embeds high-school graduates in plum tech positions. Thiel Fellowships, financed by the PayPal co-founder and Facebook investor Peter Thiel, give $100,000 to people under 20 to forgo college and work on projects of their choosing.
  • Much of this precocity — or dilettantism, depending on your point of view — has been enabled by web technologies, by easy-to-use programming frameworks like Ruby on Rails and Node.js and by the explosion of application programming interfaces (A.P.I.s) that supply off-the-shelf solutions to entrepreneurs who used to have to write all their own code for features like a login system or an embedded map. Now anyone can do it, thanks to the Facebook login A.P.I. or the Google Maps A.P.I.
  • One of the more enterprising examples of these kinds of interfaces is the start-up Stripe, which sells A.P.I.s that enable businesses to process online payments. When Meraki first looked into taking credit cards online, according to Biswas, it was a monthslong project fraught with decisions about security and cryptography. “Now, with Stripe, it takes five minutes,” he said. “When you combine that with the ability to get a server in five minutes, with Rails and Twitter Bootstrap, you see that it has become infinitely easier for four people to get a start-up off the ground.” [a minimal sketch of such an A.P.I. call appears after this list]
  • The sense that it is no longer necessary to have particularly deep domain knowledge before founding your own start-up is real; that and the willingness of venture capitalists to finance Mark Zuckerberg look-alikes are changing the landscape of tech products. There are more platforms, more websites, more pat solutions to serious problems
  • There’s a glass-half-full way of looking at this, of course: Tech hasn’t been pedestrianized — it’s been democratized. The doors to start-up-dom have been thrown wide open. At Harvard, enrollment in the introductory computer-science course, CS50, has soared
  • many of the hottest web start-ups are not novel, at least not in the sense that Apple’s Macintosh or Intel’s 4004 microprocessor were. The arc of tech parallels the arc from manufacturing to services. The Macintosh and the microprocessor were manufactured products. Some of the most celebrated innovations in technology have been manufactured products — the router, the graphics card, the floppy disk
  • One of Stripe’s founders rowed five seat in the boat I coxed freshman year in college; the other is his older brother. Among the employee profiles posted on its website, I count three of my former teaching fellows, a hiking leader, two crushes. Silicon Valley is an order of magnitude bigger than it was 30 years ago, but still, the start-up world is intimate and clubby, with top talent marshaled at elite universities and behemoths like Facebook and Google.
  • A few weeks ago, a programmer friend and I were talking about unhappiness, in particular the kind of unhappiness that arises when you are 21 and lavishly educated with the world at your feet. In the valley, it’s generally brought on by one of two causes: coming to the realization either that your start-up is completely trivial or that there are people your own age so knowledgeable and skilled that you may never catch up.
  • The latter source of frustration is the phenomenon of “the 10X engineer,” an engineer who is 10 times more productive than average. It’s a term that in its cockiness captures much of what’s good, bad and impossible about the valley. At the start-ups I visit, Friday afternoons devolve into bouts of boozing and Nerf-gun wars. Signing bonuses at Facebook are rumored to reach the six digits. In a landscape where a product may morph several times over the course of a funding round, talent — and the ability to attract it — has become one of the few stable metrics.
  • there is a surprising amount of angst in Silicon Valley. Which is probably inevitable when you put thousands of ambitious, talented young people together and tell them they’re god’s gift to technology. It’s the angst of an early hire at a start-up that only he realizes is failing; the angst of a founder who raises $5 million for his company and then finds out an acquaintance from college raised $10 million; the angst of someone who makes $100,000 at 22 but is still afraid that he may not be able to afford a house like the one he grew up in.
  • San Francisco, which is steadily stealing the South Bay’s thunder. (“Sometime in the last two years, the epicenter of consumer technology in Silicon Valley has moved from University Ave. to SoMa,” Terrence Rohan, a venture capitalist at Index Ventures, told me.)
  • Both the geographic shift north and the increasingly short product cycles are things Jim attributes to the rise of Amazon Web Services (A.W.S.), a collection of servers owned and managed by Amazon that hosts data for nearly every start-up in the latest web ecosystem.
  • now, every start-up is A.W.S. only, so there are no servers to kick, no fabs to be near. You can work anywhere. The idea that all you need is your laptop and Wi-Fi, and you can be doing anything — that’s an A.W.S.-driven invention.”
  • This same freedom from a physical location or, for that matter, physical products has led to new work structures. There are no longer hectic six-week stretches that culminate in a release day followed by a lull. Every day is release day. You roll out new code continuously, and it’s this cycle that enables companies like Facebook, as its motto goes, to “move fast and break things.”
  • Part of the answer, I think, lies in the excitement I’ve been hinting at. Another part is prestige. Smart kids want to work for a sexting app because other smart kids want to work for the same sexting app. “Highly concentrated pools of top talent are one of the rarest things you can find,” Biswas told me, “and I think people are really attracted to those environments.”
  • These days, a new college graduate arriving in the valley is merely stepping into his existing network. He will have friends from summer internships, friends from school, friends from the ever-increasing collection of incubators and fellowships.
  • As tech valuations rise to truly crazy levels, the ramifications, financial and otherwise, of a job at a pre-I.P.O. company like Dropbox or even post-I.P.O. companies like Twitter are frequently life-changing. Getting these job offers depends almost exclusively on the candidate’s performance in a series of technical interviews, where you are asked, in front of frowning hiring managers, to whip up correct and efficient code.
  • Moreover, a majority of questions seem to be pulled from undergraduate algorithms and data-structures textbooks, which older engineers may not have laid eyes on for years.
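To make concrete what the Stripe annotation above is describing, here is a minimal sketch of the kind of payments call those off-the-shelf A.P.I.s reduce things to, using Stripe's Python library and its (now legacy) Charges endpoint. The API key and test token are placeholders, and the snippet is an illustration of the idea rather than the setup Meraki or any company in the article actually used.

```python
# Minimal sketch: charging a card with Stripe's (legacy) Charges API.
# The secret key and card token below are placeholders for illustration only.
import stripe

stripe.api_key = "sk_test_placeholder"   # hypothetical test secret key

charge = stripe.Charge.create(
    amount=2000,           # amount in cents, i.e. $20.00
    currency="usd",
    source="tok_visa",     # Stripe-provided test card token
    description="Example charge",
)
print(charge.status)       # "succeeded" on a successful test charge
```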
Javier E

The AI Revolution Is Already Losing Steam - WSJ - 0 views

  • Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them. 
  • models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.
  • To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models,
  • ...25 more annotations...
  • AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
  • the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
  • AI could become a commodity
  • A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance
  • companies look for efficiencies, and whoever is winning shifts from who is in the lead to who can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.
  • the future for AI startups—like OpenAI and Anthropic—could be dim.
  • Even if Microsoft and Google are able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry war chests—unable to compete.
  • Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.
  • That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs. 
  • the bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it.
  • Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.
  • That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result
  • For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins
  • Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other companies’ AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems
  • three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
  • OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025. 
  • That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation
  • the company excels at generating interest and attention, but it’s unclear how many of those users will stick around. 
  • AI isn’t nearly the productivity booster it has been touted as
  • While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
  • Add in the myriad challenges of using AI at work. For example, AIs still make up fake information,
  • getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust.
  • the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.
  • None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. 
  • Mounting evidence suggests that won’t be the case.