
Home/ History Readings/ Group items tagged high-tech


carolinehayter

Google Lawsuit Marks End Of Washington's Love Affair With Big Tech : NPR

  • The U.S. Justice Department and 11 state attorneys general have filed a blockbuster lawsuit against Google, accusing it of being an illegal monopoly because of its stranglehold on Internet search.
  • The government alleged Google has come by its wild success — 80% market share in U.S. search, a valuation eclipsing $1 trillion — unfairly. It said multibillion-dollar deals Google has struck to be the default search engine in many of the world's Web browsers and smartphones have boxed out its rivals.
  • Google's head of global affairs, Kent Walker, said the government's case is "deeply flawed." The company warned that if the Justice Department prevails, people would pay more for their phones and have worse options for searching the Internet.
  • Just look at the word "Google," the lawsuit said — it's become "a verb that means to search the internet." What company can compete with that?
  • "It's been a relationship of extremes."
  • A tectonic shift is happening right now: USA v. Google is the biggest manifestation of what has become known as the "Techlash" — a newfound skepticism of Silicon Valley's giants and a growing appetite to rein them in through regulation.
  • "It's the end of hands-off of the tech sector," said Gene Kimmelman, a former senior antitrust official at the Justice Department. "It's probably the beginning of a decade of a series of lawsuits against companies like Google who dominate in the digital marketplace."
  • For years, under both Republican and Democratic administrations, Silicon Valley's tech stars have thrived with little regulatory scrutiny.
  • There is similar skepticism in Washington of Facebook, Amazon and Apple — the companies that, with Google, have become known as Big Tech, an echo of the corporate villains of earlier eras such as Big Oil and Big Tobacco.
  • All four tech giants have been under investigation by regulators, state attorneys general and Congress — a sharp shift from just a few years ago when many politicians cozied up to the cool kids of Silicon Valley.
  • Tech companies spend millions of dollars lobbying lawmakers, and many high-level government officials have left politics to work in tech.
  • It will likely be years before this fight is resolved.
  • She said Washington's laissez-faire attitude toward tech is at least partly responsible for the sector's expansion into nearly every aspect of our lives.
  • "These companies were allowed to grow large, in part because they had political champions on both sides of the aisle that really supported what they were doing and viewed a lot of what they were doing uncritically. And then ... these companies became so big and so powerful and so good at what they set out to do, it became something of a runaway train," she said.
  • The Google lawsuit is the most concrete action in the U.S. to date challenging the power of Big Tech. While the government stopped short of explicitly calling for a breakup, U.S. Associate Deputy Attorney General Ryan Shores said that "nothing's off the table."
  • "This case signals that the antitrust winter is over."
  • Other branches of government are also considering ways to bring these companies to heel. House Democrats released a sweeping report this month calling for new rules to strip Apple, Amazon, Facebook and Google of the power that has made each of them dominant in their fields. Their recommendations ranged from forced "structural separations" to reforming American antitrust law. Republicans, meanwhile, have channeled much of their ire into allegations that platforms such as Facebook and Twitter are biased against conservatives — a claim for which there is no conclusive evidence.
  • Congressional Republicans and the Trump administration are using those bias claims to push for an overhaul of Section 230 of the 1996 Communications Decency Act, a longstanding legal shield that protects online platforms from being sued over what people post on them and says they can't be punished for reasonable moderation of those posts.
  • The CEOs of Google, Facebook and Twitter are set to appear next week before the Senate Commerce Committee at a hearing about Section 230.
  • On the same day the Justice Department sued Google, two House Democrats, Anna Eshoo, whose California district includes large parts of Silicon Valley, and Tom Malinowski of New Jersey, introduced their own bill taking aim at Section 230. It would hold tech companies liable if their algorithms amplify or recommend "harmful, radicalizing content that leads to offline violence."
  • That means whichever party wins control of the White House and Congress in November, Big Tech should not expect the temperature in Washington to warm up.
  • Editor's note: Google, Facebook, Apple and Amazon are among NPR's financial supporters.
ethanshilling

San Francisco's Tech Workers Make the Big Move - The New York Times

  • Rent was astronomical. Taxes were high. Your neighbors didn’t like you. If you lived in San Francisco, you might have commuted an hour south to your job at Apple or Google or Facebook.
  • Remote work offered a chance at residing for a few months in towns where life felt easier. Tech workers and their bosses realized they might not need all the perks and after-work schmooze events.
  • That’s where the story of the Bay Area’s latest tech era is ending for a growing crowd of tech workers and their companies. They have suddenly movable jobs and money in the bank — money that will go plenty further somewhere else.
  • The No. 1 pick for people leaving San Francisco is Austin, Texas, with other winners including Seattle, New York and Chicago, according to moveBuddha, a site that compiles data on moving.
  • The biggest tech companies aren’t going anywhere, and tech stocks are still soaring. Apple’s flying-saucer-shaped campus is not going to zoom away. Google is still absorbing ever more office space in San Jose and San Francisco. New founders are still coming to town.
  • But the migration from the Bay Area appears real. Residential rents in San Francisco are down 27 percent from a year ago, and the office vacancy rate has spiked to 16.7 percent, a number not seen in a decade.
  • Pinterest, which has one of the most iconic offices in town, paid $90 million to break a lease for a site where it planned to expand. And companies like Twitter and Facebook have announced “work from home forever” plans.
  • Now the local tech industry is rapidly expanding. Apple is opening a $1 billion, 133-acre campus. Alphabet, Amazon and Facebook have all either expanded their footprints in Austin or have plans to. Elon Musk, the Tesla founder and one of the two richest men in the world, said he had moved to Texas. Start-up investor money is arriving, too: The investors at 8VC and Breyer Capital opened Austin offices last year.
  • The San Francisco exodus means the talent and money of newly remote tech workers are up for grabs. And it’s not just the mayor of Miami trying to lure them in.
  • There are 33,000 members in the Facebook group Leaving California and 51,000 in its sister group, Life After California. People post pictures of moving trucks and links to Zillow listings in new cities.
  • If San Francisco of the 2010s proved anything, it’s the power of proximity. Entrepreneurs could find a dozen start-up pitch competitions every week within walking distance. If they left a big tech company, there were start-ups eager to hire, and if a start-up failed, there was always another.
  • No one leaving the city is arguing that a culture of innovation is going to spring up over Zoom. So some are trying to recreate it. They are getting into property development, building luxury tiny-home compounds and taking over big, funky houses in old resort towns.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had spent three years gathering input on A.I. from thousands of experts, at a time when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems.
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace.
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • As they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous.
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I. will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

At SXSW, a Shift From Apps to a Tech Lifestyle - The New York Times

  • the tech ethos has escaped the bounds of hardware and software. Tech is turning into a culture and a style, one that has spread into new foods and clothing, and all other kinds of nonelectronic goods. Tech has become a lifestyle brand.
  • Because it draws a critical mass of tech-conversant people to a small space, SXSW has also made a reputation as a catalyst for new social networking ideas.
  • There is a sense of ennui in the world of tech conferences. What is the purpose of a conference in an age of instant online collaboration?
  • One answer might be to display a new kind of tech brand: physical products that aren’t so much dominated by new technology, but instead informed by the theories and practices that have ruled the tech business.
  • “In a lot of ways apps seem played out.”
  • They say they have applied an engineering mind-set to creating ingestible items. Traditional coffee is an inconsistent product, they argue — each cup may have significantly more or less caffeine than the last — and it can have undesirable side effects, like jitteriness.
  • Go Cubes, which the pair developed after a long prototyping process involving many different ingredients, are meant to address these shortcomings. The cubes are more portable than coffee, they offer a precise measure of caffeine, and because they include some ingredients meant to modulate caffeine’s sharpest effects, they produce a more focused high.
  • Ministry of Supply, an apparel company started by entrepreneurs who were unsatisfied with business clothing that couldn’t take the punishment that we ladle on athletic clothes, uses engineering techniques to create its products.
  • “My broader theory is that as the world shifts from TV, movies, magazines and newspapers to the Internet, one of the secondary effects of that is that cultural influence shifts from places like New York and L.A. to the Bay Area,”
Javier E

Silicon Valley Is Growing Up, Giving Parents a Break - The New York Times

  • Long hours in the office and the expectations of being connected at home are familiar to workers across industries, not just Silicon Valley. Fifty-six percent of parents in dual-income households across the wage spectrum say they find the work-family balance to be difficult and stressful. But tech takes the high-stress, high-stakes American work culture to the extreme.
  • “The tech industry’s love for scrappy, accessible founders adds to the pressure,” said Glenn Kelman, chief executive of Redfin, the online real estate company. “You’re expected to lead by example, to roll up your sleeves, to know everything going on.”
  • “Being a tech founder is all-consuming; you can never really turn off,” said Clara Shih, founder and chief executive of Hearsay Social, who recently had her first child with her husband, Daniel Chao, also a tech founder and chief executive, of Halo Neuroscience. “You can’t skimp on your family, and you can’t skimp on your start-up, so you end up skimping on yourself.”
  • One reason this has recently become an issue could be that Silicon Valley is aging. There are, of course, many established companies with older employees, but many people who work at the hot companies of the web era are now also becoming parents. And start-ups stay private for longer periods now, meaning employees work at them longer before cashing out.
  • Tech companies also employ a disproportionately small number of women — one-third of employees at many companies, and often less than one-fifth of technical employees. Over all, parenthood affects women’s careers more substantially than men’s, and women tend to be the ones who ask for family-friendly policies at work like paid leave or flex time.
  • One symbol of the cultural change in tech, fair or not, is the criticism of executives who seem to prioritize work over family. That happened to Marissa Mayer, chief executive of Yahoo, when she announced she would take only a very short leave after having twins.
Javier E

Tech Is Splitting the U.S. Work Force in Two - The New York Times

  • Phoenix cannot escape the uncomfortable pattern taking shape across the American economy: Despite all its shiny new high-tech businesses, the vast majority of new jobs are in workaday service industries, like health care, hospitality, retail and building services, where pay is mediocre.
  • automation is changing the nature of work, flushing workers without a college degree out of productive industries, like manufacturing and high-tech services, and into tasks with meager wages and no prospect for advancement.
  • Automation is splitting the American labor force into two worlds. There is a small island of highly educated professionals making good wages at corporations like Intel or Boeing, which reap hundreds of thousands of dollars in profit per employee. That island sits in the middle of a sea of less educated workers who are stuck at businesses like hotels, restaurants and nursing homes that generate much smaller profits per employee and stay viable primarily by keeping wages low.
  • economists are reassessing their belief that technological progress lifts all boats, and are beginning to worry about the new configuration of work.
  • “We automate the pieces that can be automated,” said Paul Hart, a senior vice president running the radio-frequency power business at NXP’s plant in Chandler. “The work force grows but we need A.I. and automation to increase the throughput.”
  • “The view that we should not worry about any of these things and follow technology to wherever it will go is insane,”
  • But the industry doesn’t generate that many jobs.
  • Because it pushes workers to the less productive parts of the economy, automation also helps explain one of the economy’s thorniest paradoxes: Despite the spread of information technology, robots and artificial intelligence breakthroughs, overall productivity growth remains sluggish.
  • Employment in the 58 industries with the lowest productivity, where it tops out at $65,000 per worker, grew 10 times as much over the period, to 673,000.
  • The same is true across the high-tech landscape. Aircraft manufacturing employed 4,234 people in 2017, compared to 4,028 in 2010. Computer systems design services employed 11,000 people in 2017, up from 7,000 in 2010.
  • To find the bulk of jobs in Phoenix, you have to look on the other side of the economy: where productivity is low. Building services, like janitors and gardeners, employed nearly 35,000 people in the area in 2017, and health care and social services accounted for 254,000 workers. Restaurants and other eateries employed 136,000 workers, 24,000 more than at the trough of the recession in 2010. They made less than $450 a week.
  • While Banner invests heavily in technology, the machines do not generally reduce demand for workers. “There are not huge opportunities to increase productivity, but technology has a significant impact on quality,” said Banner’s chief operating officer, Becky Kuhn.
  • The 58 most productive industries in Phoenix — where productivity ranges from $210,000 to $30 million per worker, according to Mr. Muro’s and Mr. Whiton’s analysis — employed only 162,000 people in 2017, 14,000 more than in 2010.
  • Axon, which makes the Taser as well as body cameras used by police forces, is also automating whatever it can. Today, robots make four times as many Taser cartridges as 80 workers once did less than 10 years ago.
  • The same is true across the national economy. Jobs grow in health care, social assistance, accommodation, food services, building administration and waste services.
  • On the other end of the spectrum, the employment footprint of highly productive industries, like finance, manufacturing, information services and wholesale trade, has shrunk over the last 30 years.
  • “In the standard economic canon, the proposition that you can increase productivity and harm labor is bunkum,” Mr. Acemoglu said.
  • By reducing prices and improving quality, technology was expected to raise demand, which would require more jobs. What’s more, economists thought, more productive workers would have higher incomes. This would create demand for new, unheard-of things that somebody would have to make.
  • To prove their case, economists pointed confidently to one of the greatest technological leaps of the last few hundred years, when the rural economy gave way to the industrial era.
  • In 1900, agriculture employed 12 million Americans. By 2014, tractors, combines and other equipment had flushed 10 million people out of the sector. But as farm labor declined, the industrial economy added jobs even faster. What happened? As the new farm machines boosted food production and made produce cheaper, demand for agricultural products grew. And farmers used their higher incomes to purchase newfangled industrial goods.
  • The new industries were highly productive and also subject to furious technological advancement. Weavers lost their jobs to automated looms; secretaries lost their jobs to Microsoft Windows. But each new spin of the technological wheel, from plastic toys to televisions to computers, yielded higher incomes for workers and more sophisticated products and services for them to buy.
  • In a new study, David Autor of the Massachusetts Institute of Technology and Anna Salomons of Utrecht University found that over the last 40 years, jobs have fallen in every single industry that introduced technologies to enhance productivity.
  • The only reason employment didn’t fall across the entire economy is that other industries, with less productivity growth, picked up the slack. “The challenge is not the quantity of jobs,” they wrote. “The challenge is the quality of jobs available to low- and medium-skill workers.”
  • The economy today resembles what would have happened if farmers had spent their extra income from the use of tractors and combines on domestic servants. Productivity in domestic work doesn’t grow quickly. As more and more workers were bumped out of agriculture into servitude, productivity growth across the economy would have stagnated.
  • The growing awareness of robots’ impact on the working class raises anew a very old question: Could automation go too far? Mr. Acemoglu and Pascual Restrepo of Boston University argue that businesses are not even reaping large rewards for the money they are spending to replace their workers with machines.
  • The cost of automation to workers and society could be substantial. “It may well be that,” Mr. Summers said, “some categories of labor will not be able to earn a subsistence income.” And this could exacerbate social ills, from workers dropping out of jobs and getting hooked on painkillers, to mass incarceration and families falling apart.
  • Silicon Valley’s dream of an economy without workers may be implausible. But an economy where most people toil exclusively in the lowliest of jobs might be little better.
Javier E

Silicon Valley's Youth Problem - NYTimes.com

  • Why do these smart, quantitatively trained engineers, who could help cure cancer or fix healthcare.gov, want to work for a sexting app?
  • But things are changing. Technology as service is being interpreted in more and more creative ways: Companies like Uber and Airbnb, while properly classified as interfaces and marketplaces, are really providing the most elevated service of all — that of doing it ourselves.
  • All varieties of ambition head to Silicon Valley now — it can no longer be designated the sole domain of nerds like Steve Wozniak or even successor nerds like Mark Zuckerberg. The face of web tech today could easily be a designer, like Brian Chesky at Airbnb, or a magazine editor, like Jeff Koyen at Assignmint. Such entrepreneurs come from backgrounds outside computer science and are likely to think of their companies in terms more grandiose than their technical components.
  • Intel, founded by Gordon Moore and Robert Noyce, both physicists, began by building memory chips that were twice as fast as old ones. Sun Microsystems introduced a new kind of modular computer system, built by one of its founders, Andy Bechtolsheim. Their “big ideas” were expressed in physical products and grew out of their own technical expertise. In that light, Meraki, which came from Biswas’s work at M.I.T., can be seen as having its origins in the old guard. And it followed what was for decades the highway that connected academia to industry: Grad students researched technology, powerful advisers brokered deals, students dropped out to parlay their technologies into proprietary solutions, everyone reaped the profits. That implicit guarantee of academia’s place in entrepreneurship has since disappeared. Graduate students still drop out, but to start bike-sharing apps and become data scientists. That is, if they even make it to graduate school. The success of self-educated savants like Sean Parker, who founded Napster and became Facebook’s first president with no college education to speak of, set the template. Enstitute, a two-year apprenticeship, embeds high-school graduates in plum tech positions. Thiel Fellowships, financed by the PayPal co-founder and Facebook investor Peter Thiel, give $100,000 to people under 20 to forgo college and work on projects of their choosing.
  • Much of this precocity — or dilettantism, depending on your point of view — has been enabled by web technologies, by easy-to-use programming frameworks like Ruby on Rails and Node.js and by the explosion of application programming interfaces (A.P.I.s) that supply off-the-shelf solutions to entrepreneurs who used to have to write all their own code for features like a login system or an embedded map. Now anyone can do it, thanks to the Facebook login A.P.I. or the Google Maps A.P.I.
  • One of the more enterprising examples of these kinds of interfaces is the start-up Stripe, which sells A.P.I.s that enable businesses to process online payments. When Meraki first looked into taking credit cards online, according to Biswas, it was a monthslong project fraught with decisions about security and cryptography. “Now, with Stripe, it takes five minutes,” he said. “When you combine that with the ability to get a server in five minutes, with Rails and Twitter Bootstrap, you see that it has become infinitely easier for four people to get a start-up off the ground.”
  • The sense that it is no longer necessary to have particularly deep domain knowledge before founding your own start-up is real; that and the willingness of venture capitalists to finance Mark Zuckerberg look-alikes are changing the landscape of tech products. There are more platforms, more websites, more pat solutions to serious problems
  • There’s a glass-half-full way of looking at this, of course: Tech hasn’t been pedestrianized — it’s been democratized. The doors to start-up-dom have been thrown wide open. At Harvard, enrollment in the introductory computer-science course, CS50, has soared
  • many of the hottest web start-ups are not novel, at least not in the sense that Apple’s Macintosh or Intel’s 4004 microprocessor were. The arc of tech parallels the arc from manufacturing to services. The Macintosh and the microprocessor were manufactured products. Some of the most celebrated innovations in technology have been manufactured products — the router, the graphics card, the floppy disk
  • One of Stripe’s founders rowed five seat in the boat I coxed freshman year in college; the other is his older brother. Among the employee profiles posted on its website, I count three of my former teaching fellows, a hiking leader, two crushes. Silicon Valley is an order of magnitude bigger than it was 30 years ago, but still, the start-up world is intimate and clubby, with top talent marshaled at elite universities and behemoths like Facebook and Google.
  • Part of the answer, I think, lies in the excitement I’ve been hinting at. Another part is prestige. Smart kids want to work for a sexting app because other smart kids want to work for the same sexting app. “Highly concentrated pools of top talent are one of the rarest things you can find,” Biswas told me, “and I think people are really attracted to those environments.
  • The latter source of frustration is the phenomenon of “the 10X engineer,” an engineer who is 10 times more productive than average. It’s a term that in its cockiness captures much of what’s good, bad and impossible about the valley. At the start-ups I visit, Friday afternoons devolve into bouts of boozing and Nerf-gun wars. Signing bonuses at Facebook are rumored to reach the six digits. In a landscape where a product may morph several times over the course of a funding round, talent — and the ability to attract it — has become one of the few stable metrics.
  • there is a surprising amount of angst in Silicon Valley. Which is probably inevitable when you put thousands of ambitious, talented young people together and tell them they’re god’s gift to technology. It’s the angst of an early hire at a start-up that only he realizes is failing; the angst of a founder who raises $5 million for his company and then finds out an acquaintance from college raised $10 million; the angst of someone who makes $100,000 at 22 but is still afraid that he may not be able to afford a house like the one he grew up in.
  • San Francisco, which is steadily stealing the South Bay’s thunder. (“Sometime in the last two years, the epicenter of consumer technology in Silicon Valley has moved from University Ave. to SoMa,” Terrence Rohan, a venture capitalist at Index Ventures, told me
  • Both the geographic shift north and the increasingly short product cycles are things Jim attributes to the rise of Amazon Web Services (A.W.S.), a collection of servers owned and managed by Amazon that hosts data for nearly every start-up in the latest web ecosystem.
  • now, every start-up is A.W.S. only, so there are no servers to kick, no fabs to be near. You can work anywhere. The idea that all you need is your laptop and Wi-Fi, and you can be doing anything — that’s an A.W.S.-driven invention.”
  • This same freedom from a physical location or, for that matter, physical products has led to new work structures. There are no longer hectic six-week stretches that culminate in a release day followed by a lull. Every day is release day. You roll out new code continuously, and it’s this cycle that enables companies like Facebook, as its motto goes, to “move fast and break things.”
  • A few weeks ago, a programmer friend and I were talking about unhappiness, in particular the kind of unhappiness that arises when you are 21 and lavishly educated with the world at your feet. In the valley, it’s generally brought on by one of two causes: coming to the realization either that your start-up is completely trivial or that there are people your own age so knowledgeable and skilled that you may never catch up.
  • These days, a new college graduate arriving in the valley is merely stepping into his existing network. He will have friends from summer internships, friends from school, friends from the ever-increasing collection of incubators and fellowships.
  • As tech valuations rise to truly crazy levels, the ramifications, financial and otherwise, of a job at a pre-I.P.O. company like Dropbox or even post-I.P.O. companies like Twitter are frequently life-changing. Getting these job offers depends almost exclusively on the candidate’s performance in a series of technical interviews, where you are asked, in front of frowning hiring managers, to whip up correct and efficient code.
  • Moreover, a majority of questions seem to be pulled from undergraduate algorithms and data-structures textbooks, which older engineers may not have laid eyes on for years.
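The A.P.I. point in the excerpts above (Stripe, the Facebook login A.P.I., the Google Maps A.P.I.) can be made concrete with a small sketch. Everything below is a hypothetical stand-in for illustration, not Stripe's actual SDK or endpoint: the point is only that a tokenized card reference and one structured request replace the months of in-house security and cryptography work the article describes.

```python
# A sketch of the kind of integration the excerpt describes: instead of
# building payment handling from scratch, a start-up composes one request
# to a hosted payments API. The client class, endpoint URL, and token
# format here are illustrative placeholders.

import json


class PaymentsClient:
    """Minimal illustrative wrapper around a hypothetical hosted payments API."""

    def __init__(self, api_key: str,
                 endpoint: str = "https://api.example-payments.com/v1/charges"):
        self.api_key = api_key
        self.endpoint = endpoint

    def build_charge(self, amount_cents: int, currency: str, card_token: str) -> dict:
        # The provider handles security and cryptography; the caller only
        # supplies a tokenized card reference, never raw card numbers.
        return {
            "amount": amount_cents,
            "currency": currency,
            "source": card_token,
        }


client = PaymentsClient(api_key="sk_test_placeholder")
payload = client.build_charge(1999, "usd", "tok_abc123")
print(json.dumps(payload, sort_keys=True))
```

Because the provider never hands the merchant raw card numbers, the hard compliance problems stay on the provider's side of the A.P.I. boundary, which is what collapses a "monthslong project" into minutes.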
Javier E

Tech C.E.O.s Are in Love With Their Principal Doomsayer - The New York Times - 0 views

  • The futurist philosopher Yuval Noah Harari worries about a lot.
  • He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete.
  • He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will.
  • He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”
  • If this is his harrowing warning, then why do Silicon Valley C.E.O.s love him so
  • When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.
  • it’s insane he’s so popular, they’re all inviting him to campus — yet what Yuval is saying undermines the premise of the advertising- and engagement-based model of their products,
  • Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else
  • he brought up Aldous Huxley. Generations have been horrified by his novel “Brave New World,” which depicts a regime of emotion control and painless consumption. Readers who encounter the book today, Mr. Harari said, often think it sounds great. “Everything is so nice, and in that way it is an intellectually disturbing book because you’re really hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision coming out of some people in Silicon Valley which goes in that direction.”
  • The story of his current fame begins in 2011, when he published a book of notable ambition: to survey the whole of human existence. “Sapiens: A Brief History of Humankind,” first released in Hebrew, did not break new ground in terms of historical research. Nor did its premise — that humans are animals and our dominance is an accident — seem a likely commercial hit. But the casual tone and smooth way Mr. Harari tied together existing knowledge across fields made it a deeply pleasing read, even as the tome ended on the notion that the process of human evolution might be over.
  • He followed up with “Homo Deus: A Brief History of Tomorrow,” which outlined his vision of what comes after human evolution. In it, he describes Dataism, a new faith based around the power of algorithms. Mr. Harari’s future is one in which big data is worshiped, artificial intelligence surpasses human intelligence, and some humans develop Godlike abilities.
  • Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”
  • At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive
  • Some workers had tried to predict how well humans would adapt to large technological change based on how they have responded to small shifts, like a new version of Gmail. Mr. Harari told them to think more starkly: If there isn’t a major policy intervention, most humans probably will not adapt at all.
  • It made him sad, he told me, to see people build things that destroy their own societies, but he works every day to maintain an academic distance and remind himself that humans are just animals. “Part of it is really coming from seeing humans as apes, that this is how they behave,” he said, adding, “They’re chimpanzees. They’re sapiens. This is what they do.”
  • this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.”
  • He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”
  • Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said
  • as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.
  • He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.
  • Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”
  • The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.
  • Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.
  • This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”
  • On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”
  • “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”
  • Today, they have a team of eight based in Tel Aviv working on Mr. Harari’s projects. The director Ridley Scott and documentarian Asif Kapadia are adapting “Sapiens” into a TV show, and Mr. Harari is working on children’s books to reach a broader audience.
  • Being gay, Mr. Harari said, has helped his work — it set him apart to study culture more clearly because it made him question the dominant stories of his own conservative Jewish society. “If society got this thing wrong, who guarantees it didn’t get everything else wrong as well?” he said
  • “If I was a superhuman, my superpower would be detachment,” Mr. Harari added. “O.K., so maybe humankind is going to disappear — O.K., let’s just observe.”
  • They just finished “Dear White People,” and they loved the Australian series “Please Like Me.” That night, they had plans to either meet Facebook executives at company headquarters or watch the YouTube show “Cobra Kai.”
Javier E

Inside the unnerving world of Silicon Valley - and how it invaded cyberspace - The Wash... - 0 views

  • “Uncanny Valley” and Joanne McNeil’s “Lurking: How a Person Became a User” defamiliarize us with the Internet as we now know it, reminding us of the human desires and ambitions that have shaped its evolution.
  • Wiener’s book is studded with sharp assessments. In San Francisco’s high-end restaurant scene, she notes, “the food was demented. . . . Food that was social media famous. Food that wanted to be.”
  • she turns to books and magazines, but finds no mental relief. Contemporary literature has taken on social media’s “curatorial affect: beautiful descriptions of little substance, arranged in elegant vignettes — gestural text, the equivalent of a rumpled linen bedsheet or a bunch of dahlias placed just so.”
  • The tech denizens wear pseudo-utilitarian garments, like the knitted, machine-washable shoes that she deems “a monument to the end of sensuousness”
  • Bizarrely, and predictably, the tech people offer tech fixes for our shredded civic fabric: “a Marshall Plan of rationality,” say, or “crowdfunding private planes to fly over red counties and drop leaflets.”
  • The man-children who hire her really believe they can change the world by selling e-books, analyzing data, offering a code-sharing service.
  • Wiener has a gift for channeling Silicon Valley’s unsettling idea of perfection and for reminding us of its allure. She gets the appeal of building something “so beautiful, so necessary, so well designed that it insinuated itself into people’s lives without external pressures,” and of creating an existence “freed of decision-making, the unnecessary friction of human behavior.”
  • In “Lurking,” the tech writer Joanne McNeil also excavates recent history
  • It’s a cheerfully digressive book, organized into chapters that each tackle some fundamental property of the Internet. The first chapter, “Search,” traces the evolution of Google and people’s relationship to inquiry. “Anonymity” revisits the early groups that homesteaded in cyberspace, while “Visibility” and “Community” take us through the sites (Friendster, Myspace, Facebook) that successively colonized it.
  • “Sharing” meditates on the circulation of images and language. “Clash” presents a brief history of online activism. “Accountability” explores how well-structured sites might contain bad actors.
  • To grasp the Internet we know today, we have to remember the freer, weirder, more innocent pseudonymity that thrived on the World Wide Web before major tech companies swallowed it whole.
  • “Lurking” is more like infinite scroll. Having picked your platform, you float on the current of content, thick with froth and detritus and the occasional treasure, until something makes you ask: Wait, what? How did I get here
  • In rewinding our recent Internet history, both books remind us of just how deeply living online has overloaded our thought patterns, installing in our hindbrains a thrumming and consta
Javier E

Opinion | Our Kids Are Living In a Different Digital World - The New York Times - 0 views

  • You may have seen the tins that contain 15 little white rectangles that look like the desiccant packs labeled “Do Not Eat.” Zyns are filled with nicotine and are meant to be placed under your lip like tobacco dip. No spitting is required, so nicotine pouches are even less visible than vaping. Zyns come in two strengths in the United States, three and six milligrams. A single six-milligram pouch is a dose so high that first-time users on TikTok have said it caused them to vomit or pass out.
  • We worry about bad actors bullying, luring or indoctrinating them online
  • I was stunned by the vast forces that are influencing teenagers. These forces operate largely unhampered by a regulatory system that seems to always be a step behind when it comes to how children can and are being harmed on social media.
  • Parents need to know that when children go online, they are entering a world of influencers, many of whom are hoping to make money by pushing dangerous products. It’s a world that’s invisible to us
  • when we log on to our social media, we don’t see what they see. Thanks to algorithms and ad targeting, I see videos about the best lawn fertilizer and wrinkle laser masks, while Ian is being fed reviews of flavored vape pens and beautiful women livestreaming themselves gambling crypto and urging him to gamble, too.
  • Smartphones are taking our kids to a different world
  • Greyson Imm, an 18-year-old high school student in Prairie Village, Kan., said he was 17 when Zyn videos started appearing on his TikTok feed. The videos multiplied through the spring, when they were appearing almost daily. “Nobody had heard about Zyn until very early 2023,” he said. Now, a “lot of high schoolers have been using Zyn. It’s really taken off, at least in our community.”
  • all of this is, unfortunately, only part of what makes social media dangerous.
  • The tobacco conglomerate Philip Morris International acquired the Zyn maker Swedish Match in 2022 as part of a strategic push into smokeless products, a category it projects could help drive an expected $2 billion in U.S. revenue in 2024.
  • P.M.I. is also a company that has long denied it markets tobacco products to minors despite decades of research accusing it of just that. One 2022 study alone found its brands advertising near schools and playgrounds around the globe.
  • the ’90s, when magazines ran full-page Absolut Vodka ads in different colors, which my friends and I collected and taped up on our walls next to pictures of a young Leonardo DiCaprio — until our parents tore them down. This was advertising that appealed to me as a teenager but was also visible to my parents, and — crucially — to regulators, who could point to billboards near schools or flavored vodka ads in fashion magazines and say, this is wrong.
  • Even the most committed parent today doesn’t have the same visibility into what her children are seeing online, so it is worth explaining how products like Zyn end up in social feeds
  • influencers. They aren’t traditional pitch people. Think of them more like the coolest kids on the block. They establish a following thanks to their personality, experience or expertise. They share how they’re feeling, they share what they’re thinking about, they share stuff they l
  • With ruthless efficiency, social media can deliver unlimited amounts of the content that influencers create or inspire. That makes the combination of influencers and social-media algorithms perhaps the most powerful form of advertising ever invented.
  • Videos like his operate like a meme: It’s unintelligible to the uninitiated, it’s a hilarious inside joke to those who know, and it encourages the audience to spread the message
  • Enter Tucker Carlson. Mr. Carlson, the former Fox News megastar who recently started his own subscription streaming service, has become a big Zyn influencer. He’s mentioned his love of Zyn in enough podcasts and interviews to earn the nickname Tucker CarlZyn.
  • was Max VanderAarde. You can glimpse him in a video from the event wearing a Santa hat and toasting Mr. Carlson as they each pop Zyns in their mouths. “You can call me king of Zynbabwe, or Tucker CarlZyn’s cousin,” he says in a recent TikTok. “Probably, what, moved 30 mil cans last year?”
  • Freezer Tarps, Mr. VanderAarde’s TikTok account, appears to have been removed after I asked the company about it. Left up are the large number of TikToks by the likes of @lifeofaZyn, @Zynfluencer1 and @Zyntakeover; those hashtagged to #Zynbabwe, one of Freezer Tarps’s favorite terms, have amassed more than 67 million views. So it’s worth breaking down Mr. VanderAarde’s videos.
  • All of these videos would just be jokes (in poor taste) if they were seen by adults only. They aren’t. But we can’t know for sure how many children follow the Nelk Boys or Freezer Tarps — social-media companies generally don’t release granular age-related data to the public. Mr. VanderAarde, who responded to a few of my questions via LinkedIn, said that nearly 95 percent of his followers are over the age of 18.
  • They’re incentivized to increase their following and, in turn, often their bank accounts. Young people are particularly susceptible to this kind of promotion because their relationship with influencers is akin to the intimacy of a close friend.
  • The helicopter video has already been viewed more than one million times on YouTube, and iterations of it have circulated widely on TikTok.
  • YouTube said it eventually determined that four versions of the Carlson Zyn videos were not appropriate for viewers under age 18 under its community guidelines and restricted access to them by age
  • Mr. Carlson declined to comment on the record beyond his two-word statement. The Nelk Boys didn’t respond to requests for comment. Meta declined to comment on the record. TikTok said it does not allow content that promotes tobacco or its alternatives. The company said that it has over 40,000 trust and safety experts who work to keep the platform safe and that it prevented teenagers’ accounts from viewing over two million videos globally that show the consumption of tobacco products by adults. TikTok added that in the third quarter of 2023 it proactively removed 97 percent of videos that violated its alcohol, tobacco and drugs policy.
  • Greyson Imm, the high school student in Prairie Village, Kan., points to Mr. VanderAarde as having brought Zyn “more into the mainstream.” Mr. Imm believes his interest in independent comedy on TikTok perhaps made him a target for Mr. VanderAarde’s videos. “He would create all these funny phrases or things that would make it funny and joke about it and make it relevant to us.”
  • It wasn’t long before Mr. Imm noticed Zyn blowing up among his classmates — so much so that the student, now a senior at Shawnee Mission East High School, decided to write a piece in his school newspaper about it. He conducted an Instagram poll from the newspaper’s account and found that 23 percent of the students who responded used oral nicotine pouches during school.
  • “Upper-decky lip cushions, ferda!” Mr. VanderAarde coos in what was one of his popular TikTok videos, which had been liked more than 40,000 times. The singsong audio sounds like gibberish to most people, but it’s actually a call to action. “Lip cushion” is a nickname for a nicotine pouch, and “ferda” is slang for “the guys.”
  • “I have fun posting silly content that makes fun of pop culture,” Mr. VanderAarde said to me in our LinkedIn exchange.
  • I turned to Influencity, a software program that estimates the ages of social media users by analyzing profile photos and selfies in recent posts. Influencity estimated that roughly 10 percent of the Nelk Boys’ followers on YouTube are ages 13 to 17. That’s more than 800,000 children.
  • I’ve spent the past three years studying media manipulation and memes, and what I see in Freezer Tarps’s silly content is strategy. The use of Zyn slang seems like a way to turn interest in Zyn into a meme that can be monetized through merchandise and other business opportunities.
  • Such as? Freezer Tarps sells his own pouch product, Upperdeckys, which delivers caffeine instead of nicotine and is available in flavors including cotton candy and orange creamsicle. In addition to jockeying for sponsorship, Mr. Carlson may also be trying to establish himself with a younger, more male, more online audience as his new media company begins building its subscriber base
  • This is the kind of viral word-of-mouth marketing that looks like entertainment, functions like culture and can increase sales
  • What’s particularly galling about all of this is that we as a society already agreed that peddling nicotine to kids is not OK. It is illegal to sell nicotine products to anyone under the age of 21 in all 50 states
  • numerous studies have shown that the younger people are when they try nicotine for the first time, the more likely they will become addicted to it. Nearly 90 percent of adults who smoke daily started smoking before they turned 18.
  • Decades later — even after Juul showed the power of influencers to help addict yet another generation of children — the courts, tech companies and regulators still haven’t adequately grappled with the complexities of the influencer economy.
  • Facebook, Instagram and TikTok all have guidelines that prohibit tobacco ads and sponsored, endorsed or partnership-based content that promotes tobacco products. Holding them accountable for maintaining those standards is a bigger question.
  • We need a new definition of advertising that takes into account how the internet actually works. I’d go so far as to propose that the courts broaden the definition of advertising to include all influencer promotion. For a product as dangerous as nicotine, I’d put the bar to be considered an influencer as low as 1,000 followers on a social-media account, and maybe if a video from someone with less of a following goes viral under certain legal definitions, it would become influencer promotion.
  • Laws should require tech companies to share data on what young people are seeing on social media and to prevent any content promoting age-gated products from reaching children’s feeds
  • Those efforts must go hand in hand with social media companies putting real teeth behind their efforts to verify the ages of their users. Government agencies should enforce the rules already on the books to protect children from exposure to addictive products,
  • I refuse to believe there aren’t ways to write laws and regulations that can address these difficult questions over tech company liability and free speech, that there aren’t ways to hold platforms more accountable for advertising that might endanger kids. Let’s stop treating the internet like a monster we can’t control. We built it. We foisted it upon our children. We had better try to protect them from its potential harms as best we can.
Javier E

What Elon Musk's 'Age of Abundance' Means for the Future of Capitalism - WSJ - 0 views

  • When it comes to the future, Elon Musk’s best-case scenario for humanity sounds a lot like Sci-Fi Socialism.
  • “We will be in an age of abundance,” Musk said this month.
  • Sunak said he believes the act of work gives meaning, and had some concerns about Musk’s prediction. “I think work is a good thing, it gives people purpose in their lives,” Sunak told Musk. “And if you then remove a large chunk of that, what does that mean?”
  • Part of the enthusiasm behind the sky-high valuation of Tesla, where he is chief executive, comes from his predictions for the auto company’s abilities to develop humanoid robots—dubbed Optimus—that can be deployed for everything from personal assistants to factory workers. He’s also founded an AI startup, dubbed xAI, that he said aims to develop its own superhuman intelligence, even as some are skeptical of that possibility. 
  • Musk likes to point to another work of Sci-Fi to describe how AI could change our world: a series of books by the late, self-described socialist author Iain Banks that revolve around a post-scarcity society that includes superintelligent AI.
  • That is the question.
  • “We’re actually going to have—and already do have—a massive shortage of labor. So, I think we will have not people out of work but actually still a shortage of labor—even in the future.” 
  • Musk has cast his work to develop humanoid robots as an attempt to solve labor issues, saying there aren’t enough workers and cautioning that low birthrates will be even more problematic. 
  • Instead, Musk predicts robots will be taking jobs that are uncomfortable, dangerous or tedious. 
  • A few years ago, Musk declared himself a socialist of sorts. “Just not the kind that shifts resources from most productive to least productive, pretending to do good, while actually causing harm,” he tweeted. “True socialism seeks greatest good for all.”
  • “It’s fun to cook food but it’s not that fun to wash the dishes,” Musk said this month. “The computer is perfectly happy to wash the dishes.”
  • In the near term, Goldman Sachs in April estimated generative AI could boost the global gross domestic product by 7% during the next decade and that roughly two-thirds of U.S. occupations could be partially automated by AI. 
  • Vinod Khosla, a prominent venture capitalist whose firm has invested in the technology, predicted within a decade AI will be able to do “80% of 80%” of all jobs today.
  • “I believe the need to work in society will disappear in 25 years for those countries that adapt these technologies,” Khosla said. “I do think there’s room for universal basic income assuring a minimum standard and people will be able to work on the things they want to work on.” 
  • Forget universal basic income. In Musk’s world, he foresees something more lush, where most things will be abundant except unique pieces of art and real estate. 
  • “We won’t have universal basic income, we’ll have universal high income,” Musk said this month. “In some sense, it’ll be somewhat of a leveler or an equalizer because, really, I think everyone will have access to this magic genie.” 
  • All of which kind of sounds a lot like socialism—except it’s unclear who controls the resources in this Muskism society.
  • “Digital super intelligence combined with robotics will essentially make goods and services close to free in the long term,” Musk said.
  • “What is an economy? An economy is GDP per capita times capita.” Musk said at a tech conference in France this year. “Now what happens if you don’t actually have a limit on capita—if you have an unlimited number of…people or robots? It’s not clear what meaning an economy has at that point because you have an unlimited economy effectively.”
  • In theory, humanity would be freed up for other pursuits. But what? Baby making. Bespoke cooking. Competitive human-ing. 
  • “Obviously a machine can go faster than any human but we still have humans race against each other,” Musk said. “We still enjoy competing against other humans to, at least, see who was the best human.”
  • Still, even as Musk talks about this future, he seems to be grappling with what it might actually mean in practice and how it is at odds with his own life. 
  • “If I think about it too hard, it, frankly, can be dispiriting and demotivating, because…I put a lot of blood, sweat and tears into building companies,” he said earlier this year. “If I’m sacrificing time with friends and family that I would prefer but then ultimately the AI can do all these things, does that make sense?” “To some extent,” Musk concluded, “I have to have a deliberate suspension of disbelief in order to remain motivated.”
Javier E

A Future Without Jobs? Two Views of the Changing Work Force - The New York Times - 0 views

  • Eduardo Porter: I read your very interesting column about the universal basic income, the quasi-magical tool to ensure some basic standard of living for everybody when there are no more jobs for people to do. What strikes me about this notion is that it relies on a view of the future that seems to have jelled into a certainty, at least among the technorati on the West Coast
  • the economic numbers that we see today don’t support this view. If robots were eating our lunch, it would show up as fast productivity growth. But as Robert Gordon points out in his new book, “The Rise and Fall of American Growth,” productivity has slowed sharply. He argues pretty convincingly that future productivity growth will remain fairly modest, much slower than during the burst of American prosperity in mid-20th century.
  • it relies on an unlikely future. It’s not a future with a lot of crummy work for low pay, but essentially a future with little or no paid work at all.
  • ...17 more annotations...
  • The former seems to me a not unreasonable forecast — we’ve been losing good jobs for decades, while low-wage employment in the service sector has grown. But no paid work? That’s more a dream (or a nightmare) than a forecast
  • Farhad Manjoo: Because I’m scared that they’ll unleash their bots on me, I should start by defending the techies a bit
  • They see a future in which a small group of highly skilled tech workers reign supreme, while the rest of the job world resembles the piecemeal, transitional work we see coming out of tech today (Uber drivers, Etsy shopkeepers, people who scrape by on other people’s platforms).
  • Why does that future call for instituting a basic income instead of the smaller and more feasible labor-policy ideas that you outline? I think they see two reasons. First, techies have a philosophical bent toward big ideas, and U.B.I. is very big.
  • They see software not just altering the labor market at the margins but fundamentally changing everything about human society. While there will be some work, for most nonprogrammers work will be insecure and unreliable. People could have long stretches of not working at all — and U.B.I. is alone among proposals that would allow you to get a subsidy even if you’re not working at all
  • If there are, in fact, jobs to be had, a universal basic income may not be the best choice of policy. The lack of good work is probably best addressed by making the work better — better paid and more skilled — and equipping workers to perform it,
  • The challenge of less work could just lead to fewer working hours. Others are already moving in this direction. People work much less in many other rich countries: Norwegians work 20 percent fewer hours per year than Americans; Germans 25 percent fewer.
  • Farhad Manjoo: One key factor in the push for U.B.I., I think, is the idea that it could help reorder social expectations. At the moment we are all defined by work; Western society generally, but especially American society, keeps social score according to what people do and how much they make for it. The dreamiest proponents of U.B.I. see that changing as work goes away. It will be O.K., under this policy, to choose a life of learning instead of a low-paying bad job
  • Eduardo Porter: To my mind, a universal basic income functions properly only in a world with little or no paid work because the odds of anybody taking a job when his or her needs are already being met are going to be fairly low.
  • The discussion, I guess, really depends on how high this universal basic income would be. How many of our needs would it satisfy?
  • You give the techies credit for seriously proposing this as an optimal solution to wrenching technological and economic change. But in a way, isn’t it a cop-out? They’re just passing the bag to the political system. Telling Congress, “You fix it.”
  • the idea of the American government agreeing to tax capitalists enough to hand out checks to support the entire working class is in an entirely new category of fantasy.
  • paradoxically, they also see U.B.I. as more politically feasible than some of the other policy proposals you call for. One of the reasons some libertarians and conservatives like U.B.I. is that it is a very simple, efficient and universal form of welfare — everyone gets a monthly check, even the rich, and the government isn’t going to tell you what to spend it on. Its very universality breaks through political opposition.
  • Eduardo Porter: I guess some enormous discontinuity right around the corner might vastly expand our prosperity. Joel Mokyr, an economic historian that knows much more than I do about the evolution of technology, argues that the tools and techniques we have developed in recent times — from gene sequencing to electron microscopes to computers that can analyze data at enormous speeds — are about to open up vast new frontiers of possibility. We will be able to invent materials to precisely fit the specifications of our homes and cars and tools, rather than make our homes, cars and tools with whatever materials are available.
  • The question is whether this could produce another burst of productivity like the one we experienced between 1920 and 1970, which — by the way — was much greater than the mini-productivity boom produced by information technology in the 1990s.
  • investors don’t seem to think so. Long-term interest rates have been gradually declining for a fairly long time. This would suggest that investors do not expect a very high rate of return on their future investments. R.&D. intensity is slowing down, and the rate at which new businesses are formed is also slowing.
  • Little in these dynamics suggests a high-tech utopia — or dystopia, for that matter — in the offing
julia rhodes

Yanukovych Says He Was 'Wrong' on Crimea - NYTimes.com - 0 views

  • In his first interview since fleeing to Russia, Ukraine's ousted president said Wednesday that he was "wrong" to have invited Russian troops into Crimea and vowed to try to persuade Russia to return the coveted Black Sea peninsula.
  • Yanukovych denied the allegations of corruption, saying he built his palatial residence outside of Kiev, the Ukrainian capital, with his own money. He also denied responsibility for the sniper deaths of about 80 protesters in Kiev in February, for which he has been charged by Ukraine's interim government.
  • While Russia can hardly be expected to roll back its annexation, Yanukovych's statement could widen Putin's options in the talks on settling the Ukrainian crisis by creating an impression that Moscow could be open for discussions on Crimea's status in the future.
  • ...2 more annotations...
  • "I was wrong," he said. "I acted on my emotions."
  • Yanukovych did not answer several questions about whether he would support Russia — which has deployed tens of thousands of troops near the Ukrainian border — moving into Ukraine to protect ethnic Russians, the justification Putin used to take Crimea.
Javier E

In Silicon Valley, Auto Racing Becomes a Favorite Hobby for Tech Elites - NYTimes.com - 0 views

  • Mr. Buckler’s team fields drivers in more than a dozen races a year, and he calls strategy in each of them. But racing is only half of his business. He also owns a winery in Petaluma, north of San Francisco, and he has sought out connections with the tech industry in order to turn racing into the new great nexus for business networking, or what Mr. Buckler calls “relationship marketing.”
  • “These Silicon Valley companies tell me that they’ve got skyboxes at the Raiders, the Giants, the 49ers for their clients, but they can’t fill them,” Mr. Buckler explained when he wasn’t barking calls over a headset to his drivers. “We let you invite your customers to Laguna Seca Raceway for a morning, where they’ll get professional instruction driving Aston Martin racecars, and then we wrap up with a nice dinner or wine tasting,” he said. “Well, they’re full, everyone wants to go.”
  • aside from their disproportionate number of $90,000 Tesla Model S cars, which are one of the few socially acceptable displays of wealth in the industry, the parking lots of Silicon Valley’s tech giants are generally indistinguishable from the parking lots of most blue-state office parks. Mark Zuckerberg, for example, drives a Volkswagen GTI. It’s not unusual to hear techies profess their disinterest in cars.
  • ...6 more annotations...
  • His newfound interest in cars and racing, he said, was in some ways connected to his interest in tech.
  • “It’s a fiddly technological skill that you can always improve on,” Mr. Schachter said. “The same kind of guy who might be upgrading the video card on their computer for better performance might also be upgrading their car.” There’s also the visceral thrill. “When you make software, it’s an unreal product,” he said. “Building something physical is attractive in different ways.”
  • Then there’s the fact that cars are becoming much more like computers. Racecars now carry something like an automotive Fitbit, sophisticated sensors that precisely measure just about everything that’s happening on the track, from G-force to where drivers are braking and accelerating. All this data can be tracked and analyzed, turning racing into a sport of empiricism as much as of instinct.
  • At some point during every conversation I had with a tech guy who is interested in racing, there would come an awkward moment in which he would ask me not to paint him as an extravagant, sexist cretin. Mr. Schachter told me, “Try to tone down the rich guy hobby thing.”
  • Mr. Bonforte said that many of his friends preferred to stay silent about racing because “the things that make us smell like the 1 percent, we’re very nervous about.” He added that while he has invited several women to come to the track, none had accepted his offer. The rise of a new boys’ club in Silicon Valley — one that was apparently leading to new deals and other business prizes — was “a totally valid concern.”
  • Mr. Schachter pointed out that the most popular car for racing enthusiasts is a Mazda Miata, older models of which sell for less than $5,000. Renting a car for a day on the track costs a few hundred.
Javier E

Silicon Valley Has Not Saved Us From a Productivity Slowdown - The New York Times - 0 views

  • In mature economies, higher productivity typically is required for sustained increases in living standards, but the productivity numbers in the United States have been mediocre. Labor productivity has been growing at an average of only 1.3 percent annually since the start of 2005, compared with 2.8 percent annually in the preceding 10 years
  • Marc Andreessen, the Silicon Valley entrepreneur and venture capitalist, says information technology is providing significant benefits that just don’t show up in the standard measurements of wages and productivity. Consider that consumers have access to services like Facebook, Google and Wikipedia free of charge, and those benefits aren’t fully accounted for in the official numbers. This notion — that life is getting better, often in ways we are barely measuring — is fairly common in tech circles.
  • Chad Syverson, a professor of economics at the University of Chicago Booth School of Business, has looked more scientifically at the evidence and concluded that the productivity slowdown is all too real
  • ...4 more annotations...
  • An additional problem for the optimistic interpretation is this: The productivity slowdown is too big in scale, relative to the size of the tech sector, to be plausibly compensated for by tech progress.
  • Basically, under a conservative estimate, as outlined by Professor Syverson, the productivity slowdown has led to a cumulative loss of $2.7 trillion in gross domestic product since the end of 2004; that is how much more output would have been produced had the earlier rate of productivity growth been maintained. To make up for this difference, Professor Syverson estimates, consumer surplus (consumer benefits in excess of market price) would have to be five times as high as measured in the industries that produce and service information and communications technology. That seems implausibly large as a measurement gap
  • The tech economy just isn’t big enough to account for the productivity gap. That gap has caused measured G.D.P. to be about 15 percent lower than it would have been otherwise, yet digital technology industries were only about 7.7 percent of G.D.P. in 2004. Even if the free component of the Internet has become more important since 2004, it’s hard to imagine that it is so much better now that it accounts for such a big proportion of G.D.P.
  • America’s productivity crisis is real and it is continuing. While information technology remains the most likely source of future breakthroughs, Silicon Valley has not saved us just yet.
Javier E

How a half-educated tech elite delivered us into evil | John Naughton | Opinion | The G... - 0 views

  • We have a burgeoning genre of “OMG, what have we done?” angst coming from former Facebook and Google employees who have begun to realise that the cool stuff they worked on might have had, well, antisocial consequences.
  • what Google and Facebook have built is a pair of amazingly sophisticated, computer-driven engines for extracting users’ personal information and data trails, refining them for sale to advertisers in high-speed data-trading auctions that are entirely unregulated and opaque to everyone except the companies themselves.
  • The purpose of this infrastructure was to enable companies to target people with carefully customised commercial messages
  • ...10 more annotations...
  • in doing this, Zuckerberg, Google co-founders Larry Page and Sergey Brin and co wrote themselves licences to print money and build insanely profitable companies.
  • It never seems to have occurred to them that their advertising engines could also be used to deliver precisely targeted ideological and political messages to voters.
  • Hence the obvious question: how could such smart people be so stupid? The cynical answer is they knew about the potential dark side all along and didn’t care, because to acknowledge it might have undermined the aforementioned licences to print money.
  • Now mathematics, engineering and computer science are wonderful disciplines – intellectually demanding and fulfilling. And they are economically vital for any advanced society. But mastering them teaches students very little about society or history – or indeed about human nature.
  • So what else could explain the astonishing naivety of the tech crowd? My hunch is it has something to do with their educational backgrounds. Take the Google co-founders. Sergey Brin studied mathematics and computer science. His partner, Larry Page, studied engineering and computer science. Zuckerberg dropped out of Harvard, where he was studying psychology and computer science, but seems to have been more interested in the latter.
  • Which is another way of saying that most tech leaders are sociopaths. Personally I think that’s unlikely
  • As a consequence, the new masters of our universe are people who are essentially only half-educated. They have had no exposure to the humanities or the social sciences, the academic disciplines that aim to provide some understanding of how society works, of history and of the roles that beliefs, philosophies, laws, norms, religion and customs play in the evolution of human culture.
  • “a liberal arts major familiar with works like Alexis de Tocqueville’s Democracy in America, John Stuart Mill’s On Liberty, or even the work of ancient Greek historians, might have been able to recognise much sooner the potential for the ‘tyranny of the majority’ or other disconcerting sociological phenomena that are embedded into the very nature of today’s social media platforms.
  • While seemingly democratic at a superficial level, a system in which the lack of structure means that all voices carry equal weight, and yet popularity, not experience or intelligence, actually drives influence, is clearly in need of more refinement and thought than it was first given.”
  • All of which brings to mind CP Snow’s famous Two Cultures lecture, delivered in Cambridge in 1959, in which he lamented the fact that the intellectual life of the whole of western society was scarred by the gap between the opposing cultures of science and engineering on the one hand, and the humanities on the other – with the latter holding the upper hand among contemporary ruling elites.
Javier E

Opinion | What Years of Emails and Texts Reveal About Your Friendly Tech Companies - Th... - 0 views

  • he picture that emerges from these documents is not one of steady entrepreneurial brilliance. Rather, at points where they might have been vulnerable to hotter, newer start-ups, Big Tech companies have managed to avoid the rigors of competition. Their two main tools — buying their way out of the problem and a willingness to lose money — are both made possible by sky-high Wall Street valuations, which go only higher with acquisitions of competitors, fueling a cycle of enrichment and consolidation of power
  • As Mr. Zuckerberg bluntly boasted in an email, because of its immense wealth Facebook “can likely always just buy any competitive start-ups.”
  • The greater scandal here may be that the federal government has let these companies get away with this
  • ...2 more annotations...
  • the government in the 2010s allowed more than 500 start-up acquisitions to go unchallenged. This hands-off approach effectively gave tech executives a green light to consolidate the industry.
  • It may be profitable and savvy to eliminate rivals to maintain a monopoly, but it remains illegal in this country under the Sherman Antitrust Act and Standard Oil v. United States. Unless we re-establish that legal fact, Big Tech will continue to fight dirty and keep on winning.
woodlu

Facebook flounders in the court of public opinion | The Economist - 2 views

  • “YOU ARE a 21st-century American hero,” gushed Ed Markey, a Democratic senator from Massachusetts. He was not addressing the founder of one of the country’s largest companies, Facebook, but the woman who found fault with it.
  • Frances Haugen, who had worked at the social-media giant before becoming a whistleblower, testified in front of a Senate subcommittee for over three hours on October 5th, highlighting Facebook’s “moral bankruptcy” and the firm’s downplaying of its harmful impact, including fanning teenage depression and ethnic violence.
  • Facebook’s own private research, for example, found that its photo-sharing site, Instagram, worsened teens’ suicidal thoughts and eating disorders. Yet it still made a point of sending young users engaging content that stoked their anxiety—while proceeding to develop a version of its site for those under the age of 13.
  • ...14 more annotations...
  • In 2018 a different whistleblower outed Facebook for its sketchy collaboration with Cambridge Analytica, a research organisation that allowed users’ data to be collected without their consent and used for political profiling by Donald Trump’s campaign. Facebook’s founder, Mark Zuckerberg, went to Washington, DC to apologise, and in 2019 America’s consumer-protection agency, the Federal Trade Commission, agreed to a $5bn settlement with Facebook. That is the largest fine ever levied against a tech firm.
  • Congress has repeatedly called in tech bosses for angry questioning and public shaming without taking direct action afterwards.
  • Senators, who cannot agree on such uncontroversial things as paying for the government’s expenses, united against a common enemy and promised Ms Haugen that they would hold Facebook to account.
  • Social media’s harmful effects on children and teenagers is a concern that transcends partisanship and is easier to understand than sneaky data-gathering, viral misinformation and other social-networking sins.
  • If Congress does follow through with legislation, it is likely to focus narrowly on protecting children online, as opposed to broader reforms, for which there is still no political consensus.
  • Congress could update and strengthen the Children’s Online Privacy Protection Act (COPPA), which was passed in 1998 and bars the collection of data from children under the age of 13.
  • Other legislative proposals take aim at manipulative marketing and design features that make social media so addictive for the young.
  • However, Ms Haugen’s most significant impact on big tech may be inspiring others to come forward and blow the whistle on their employers’ malfeasance.
  • “A case like this one opens the floodgates and will trigger hundreds more cases,” predicts Steve Kohn, a lawyer who has represented several high-profile whistleblowers.
  • One is the industry’s culture of flouting rules and a history of non-compliance. Another is a legal framework that makes whistleblowing less threatening and more attractive than it used to be.
  • The Dodd-Frank Act, which was enacted in 2010, gives greater protections to whistleblowers by preventing retaliation from employers and by offering rewards to successful cases of up to 10-30% of the money collected from sanctions against a firm.
  • If the threat of public shaming encourages corporate accountability, that is a good thing. But it could also make tech firms less inclusive and transparent, predicts Matt Perault, a former Facebook executive who is director of the Centre for Technology Policy at the University of North Carolina at Chapel Hill.
  • People may become less willing to share off-the-wall ideas if they worry about public leaks; companies may become less open with their staff; and executives could start including only a handful of trusted senior staff in meetings that might have otherwise been less restricted.
  • Facebook and other big tech firms, which have been criticised for violating people’s privacy online, can no longer count on any privacy either.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
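The training objective described in these clips — repeatedly predicting what comes next in a stream of text — can be illustrated with a toy sketch. This is not OpenAI's architecture: it uses simple bigram counts over characters instead of a neural network, but the objective (score continuations by how often they follow the current context) is the same one GPT was trained on.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, how often each next character follows it."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequently observed character after `ch`."""
    return counts[ch].most_common(1)[0][0]

corpus = "the theater is there and the thing is theirs"
model = train_bigram(corpus)
print(predict_next(model, "t"))  # 'h' — it follows 't' most often in this corpus
```

A real language model replaces the count table with billions of learned parameters and conditions on the whole preceding context, not one character, but the "predict the next token" loop is the same.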
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing.”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation.”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a non-networked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

The future belongs to Right-wing progressives - UnHerd - 0 views

  • the only subset of Right-wing thought in the West today that doesn’t feel moribund is actively anti-conservative. The liveliest corner of the Anglophone Right is scornful of cultural conservatism and nostalgia, instead combining an optimistic view of technology with a qualified embrace of global migration and an uncompromising approach to public order.
  • in much the same way as the Western Left seized on Venezuela under Chávez as a totemic worked example of this vision, so too the radical Right today has its template for the future: El Salvador under Nayib Bukele
  • These moves have drastically reduced the murder rate in a previously notoriously dangerous country
  • ...22 more annotations...
  • Since coming to power in 2019, Bukele has declared a still-to-be-rescinded state of exception, suspended the Salvadorean constitution, and locked up some 70,000 alleged gang members without due process.
  • Western critics, though, point to allegations that he has corrupted institutions by packing them with allies, not to mention, according to Amnesty International, “concealed and distorted public information, backed actions to undermine civic space, militarised public security, and used mass arrests and imprisonment as the sole strategies for counteracting violence in the country”.
  • yet, Bukele’s strongman tactics have made him wildly popular with Salvadoreans, who doubtless enjoy a reported 70% reduction in the country’s previously extremely high murder rate. They have also made Bukele a rock star for the online Right. This group, fond of complaining about spineless leaders, fraying Western law and order, and the bleeding-away of political agency into international institutions and NGOs, regards the spectacle of a strongman leader with good social media game as something like a fantasy made flesh.
  • Arguably, it’s as much his embrace of technology that accords Bukele the mantle of poster-boy for a futuristic Right. Whether in his extremely online presence, his (admittedly not completely successful) embrace of Bitcoin as legal tender, or the high-tech, recently rebuilt National Library, funded by Beijing and serving more as showcase for futuristic technologies than as reading-room
  • This trait also makes him a touchstone for the Right-wing movement that I predict will replace “conservatism” in the 21st century. This outlook owes more to the Italian Futurist Filippo Marinetti than conservatives of the G.K. Chesterton variety
  • is perhaps most visibly embodied in American technologists such as Elon Musk, Marc Andreessen or Peter Thiel. As a worldview, it is broadly pro-capitalist, enthusiastically pro-technology and unabashedly hierarchical, as well as sometimes also scornful of Christian-inflected concern for the weak.
  • We might call it, rudely, “space fascism”, though N.S. Lyons’s formulation “Right-wing progressivism” is probably more accurate. Among its adherents, high-tech authoritarianism is a feature, not a bug, and egalitarianism is for fools. Thinkers such as Curtis Yarvin propose an explicitly neo-monarchical model for governance; Thiel has declared that: “I no longer believe freedom and democracy are compatible.”
  • El Salvador is thus the most legible real-world instance of something like a Right-wing progressive programme in practice. And along with the tech enthusiasm and public-order toughness, the third distinctive feature of this programme can be gleaned: a desire not to end international migration, but to restrict it to elites.
  • For Right-wing progressives, polities are not necessarily premised on ethnic or cultural homogeneity — at least not for elites. Rather, this is a vision of statehood less based on affinity, history or even ethnicity, and more on a kind of opt-in, utility-maximisation model
  • As for those still wedded to the 20th-century idea that being Right-wing necessarily means ethnicity-based nationalism, they are likely to find this outlook bewildering.
  • Right-wing progressives generally accord greater political value to gifted, high-productivity foreigners than any slow-witted, unproductive coethnic: those within Right-wing progressive circles propose, and in some cases are already working on, opt-in startup cities and “network states” that would be, by definition, highly selective about membership.
  • As a worldview, it’s jarring to cultural conservatives, who generally value thick ties of shared history and affinity
  • Yet it’s still more heretical to egalitarian progressives, for whom making migration and belonging an elite privilege offends every premise of inclusion and social justice.
  • Right-wing progressives, by contrast, propose to learn from the immigration policies of polities such as Singapore and the Gulf states, and avert the political challenges posed by ethnic voting blocs by imposing tiered citizenship for low-skilled migrants, while courting the wealth and productivity of international elites
  • Bukele’s proposal suggests a pragmatic two-tier Right-wing progressive migration policy that courts rich, productive, geographically rootless international “Anywheres” of the kind long understood to have more affinity with one another than with less wealthy and more rooted “Somewheres” — but to do so while explicitly protecting cultural homogeneity on behalf of the less-mobile masses.
  • There are larger structural reasons for such pragmatism, not least that population growth is slowing or going into reverse across most of the planet.
  • At the same time, impelled by easier transportation, climate change, social-media promises of better lives elsewhere, and countless other reasons, people everywhere are on the move. As such, like a global game of musical chairs, a battle is now on for who ends up where, once the music stops — and on what terms.
  • How do you choose who is invited? And how do you keep unwanted demographics out? Within an egalitarian progressive framework, these are simply not questions that one may ask
  • Within the older, cultural conservative framework, meanwhile, all or most migration is viewed with suspicion.
  • The Right-wing progressive framework, by contrast, is upbeat about migration — provided it’s as discerning as possible, ideally granting rights only to elite incomers and filtering others aggressively by demographics, for example an assessment of the statistical likelihood of committing crime or making a net economic contribution.
  • In Britain, meanwhile, whatever happens to the Tories, I suspect we’ll see more of the Right-wing progressives. I find many of their policies unnerving, especially on the biotech side; but theirs is a political subculture with optimism and a story about the future, two traits that go a long way in politics.