
Javier E

The super-rich 'preppers' planning to save themselves from the apocalypse - The Guardian

  • at least as far as these gentlemen were concerned, this was a talk about the future of technology.
  • Taking their cue from Tesla founder Elon Musk colonising Mars, Palantir’s Peter Thiel reversing the ageing process, or artificial intelligence developers Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had less to do with making the world a better place than it did with transcending the human condition altogether. Their extreme wealth and privilege served only to make them obsessed with insulating themselves from the very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic and resource depletion. For them, the future of technology is about only one thing: escape from the rest of us.
  • These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they’ve reduced technological progress to a video game that one of them wins by finding the escape hatch.
  • these catastrophising billionaires are the presumptive winners of the digital economy – the supposed champions of the survival-of-the-fittest business landscape that’s fuelling most of this speculation to begin with.
  • What I came to realise was that these men are actually the losers. The billionaires who called me out to the desert to evaluate their bunker strategies are not the victors of the economic game so much as the victims of its perversely limited rules. More than anything, they have succumbed to a mindset where “winning” means earning enough money to insulate themselves from the damage they are creating by earning money in that way.
  • Never before have our society’s most powerful players assumed that the primary impact of their own conquests would be to render the world itself unliveable for everyone else
  • Nor have they ever before had the technologies through which to programme their sensibilities into the very fabric of our society. The landscape is alive with algorithms and intelligences actively encouraging these selfish and isolationist outlooks. Those sociopathic enough to embrace them are rewarded with cash and control over the rest of us. It’s a self-reinforcing feedback loop. This is new.
  • So far, JC Cole has been unable to convince anyone to invest in American Heritage Farms. That doesn’t mean no one is investing in such schemes. It’s just that the ones that attract more attention and cash don’t generally have these cooperative components. They’re more for people who want to go it alone
  • JC is no hippy environmentalist but his business model is based in the same communitarian spirit I tried to convey to the billionaires: the way to keep the hungry hordes from storming the gates is by getting them food security now. So for $3m, investors not only get a maximum security compound in which to ride out the coming plague, solar storm, or electric grid collapse. They also get a stake in a potentially profitable network of local farm franchises that could reduce the probability of a catastrophic event in the first place. His business would do its best to ensure there are as few hungry children at the gate as possible when the time comes to lock down.
  • Most billionaire preppers don’t want to have to learn to get along with a community of farmers or, worse, spend their winnings funding a national food resilience programme. The mindset that requires safe havens is less concerned with preventing moral dilemmas than simply keeping them out of sight.
  • Rising S Company in Texas builds and installs bunkers and tornado shelters for as little as $40,000 for an 8ft by 12ft emergency hideout all the way up to the $8.3m luxury series “Aristocrat”, complete with pool and bowling lane. The enterprise originally catered to families seeking temporary storm shelters, before it went into the long-term apocalypse business. The company logo, complete with three crucifixes, suggests their services are geared more toward Christian evangelist preppers in red-state America than billionaire tech bros playing out sci-fi scenarios.
  • Ultra-elite shelters such as the Oppidum in the Czech Republic claim to cater to the billionaire class, and pay more attention to the long-term psychological health of residents. They provide imitation of natural light, such as a pool with a simulated sunlit garden area, a wine vault, and other amenities to make the wealthy feel at home.
  • On closer analysis, however, the probability of a fortified bunker actually protecting its occupants from the reality of, well, reality, is very slim. For one, the closed ecosystems of underground facilities are preposterously brittle. For example, an indoor, sealed hydroponic garden is vulnerable to contamination. Vertical farms with moisture sensors and computer-controlled irrigation systems look great in business plans and on the rooftops of Bay Area startups; when a pallet of topsoil or a row of crops goes wrong, it can simply be pulled and replaced. The hermetically sealed apocalypse “grow room” doesn’t allow for such do-overs.
  • while a private island may be a good place to wait out a temporary plague, turning it into a self-sufficient, defensible ocean fortress is harder than it sounds. Small islands are utterly dependent on air and sea deliveries for basic staples. Solar panels and water filtration equipment need to be replaced and serviced at regular intervals. The billionaires who reside in such locales are more, not less, dependent on complex supply chains than those of us embedded in industrial civilisation.
  • If they wanted to test their bunker plans, they’d have hired a security expert from Blackwater or the Pentagon. They seemed to want something more. Their language went far beyond questions of disaster preparedness and verged on politics and philosophy: words such as individuality, sovereignty, governance and autonomy.
  • it wasn’t their actual bunker strategies I had been brought out to evaluate so much as the philosophy and mathematics they were using to justify their commitment to escape. They were working out what I’ve come to call the insulation equation: could they earn enough money to insulate themselves from the reality they were creating by earning money in this way? Was there any valid justification for striving to be so successful that they could simply leave the rest of us behind – apocalypse or not?
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A hacker who took over one of those machines could have sabotaged the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance Division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Why America's Floors Turned Gray - The Atlantic

  • you wrote, “Can I interest you in my grand unified theory of the U.S. housing market as explained by gray vinyl plank flooring and barn doors.” Tell us your theory.
  • Amanda Mull: These types of doors and flooring (basically, fake wood with gray finishes) are particularly popular among people who are redoing homes as investments, either house flippers or landlords.
  • Gray finishes are pretty cheap, and they have a big potential upside in the rental or resale market, because that’s what people see when they enter a home. And gray floors have not been popular at any point before the past 10 or so years, so if you as a renter or buyer walk into a home and see gray floors, you’re like, “Oh, somebody has just redone this place.” It gives it that feeling of newness.
  • Isabel: How did the feeling of newness—even in a place that’s not actually new—become such an important part of interior design?
  • Amanda: Newness is really important in American consumer life, especially in the past 15 years. We’ve seen across consumer categories this emphasis on having the latest and greatest. Most people are familiar with this in the arena of fast fashion. The things you have feel disposable, because they cost very little on a per-piece basis, and there’s a constant barrage of new stuff available that’s also very inexpensive
  • You get to the point where it feels like having something for a long time is a chump’s game.
  • In the housing space, the opposite has happened. We as a country have really slowed down in building new housing, and that has created price issues
  • Housing is very expensive, and what you get for your money is worsening. When homes are old, and the buying or renting public is used to newness, if you can create a sense of newness inside these older homes, you can charge more
  • that ends up being surface-level stuff that does not enhance the livability of the home and doesn’t even necessarily make it a more aesthetically pleasing space.
  • Amanda: What people are trying to do when they look at a place where they might live is just to figure out if it’s functional, and that can be difficult to evaluate on the surface level. So people tend to look around and think, Okay, well, the appliances are new, the floors are new, this stuff should hold for a while.
  • Because of the precarious position that a lot of people are in with housing in the U.S., and because of how hard it can be to get your offer accepted, you have this sense of scarcity. In those situations, some gray floors and a tile backsplash, and you’re like, Okay, somebody did something to this; let’s write an offer or apply before someone else sees it.
  • Isabel: You write that “all told, nearly a third of American house sales last year went to people who had no intention of living in them.” How is the current economic moment affecting the trend of house flipping?
  • Amanda: I don’t think it’s overstating it to say that gray floors are a physical manifestation of the economic realities of American life. For a lot of people, homeownership is a path to financial stability, and it’s the path that’s most common in America. Because housing is a good investment, a lot of people are interested in it who aren’t interested in living in those homes that they buy: Especially since the United States is not building a lot more housing, it’s a really attractive asset for institutional investors, property managers, and flippers. There are a lot of people dissatisfied with their careers and wages looking for something else to do that is cash positive.
Javier E

As Xi Heads to San Francisco, Chinese Propaganda Embraces America - The New York Times

  • Now, the tone used to discuss the United States has suddenly shifted
  • Xinhua, the state news agency, on Monday published a lengthy article in English about the “enduring strength” of Mr. Xi’s affection for ordinary Americans.
  • “More delightful moments unfolded when Xi showed up to watch an N.B.A. game,” the article continued, describing a visit by Mr. Xi to the United States in 2012. “He remained remarkably focused on the game.”
  • Separately, Xinhua has published a five-part series in Chinese on “Getting China-U.S. Relations Back on Track.”
  • Beijing, in particular, may be motivated to play up the meeting to reassure investors and foreign businesses
  • On Guancha.cn, a nationalistic news and commentary site, columnists have noted that both countries are making short-term concessions for their own long-term strategic gain.
  • many Chinese social media users have taken note of the abrupt turn — and have been left reeling, or at least wryly amused
  • Under another post showing true, recent state media editorials promoting U.S.-China relations, a commenter wrote: “So, going forward, do we or don’t we need to hate America? So unclear.”
  • “Propaganda of this type is not meant for persuasion — it is not persuasive at all,” Professor Chen said. “It is mainly designed for signaling, in the hope that recipients will get the signal and implement the proper response, which is investment, or resumption of exchanges.”
  • Even the most flowery Chinese articles have drawn distinctions between warm ties between American and Chinese people, and their governments; some state media outlets have continued to warn that the outcome of the California meeting will hinge on the United States, in line with Beijing’s stance that the strained relationship is entirely Washington’s fault.
  • On the future of U.S.-China relations, Professor Wang wrote, “I am only cautious, not optimistic.”
Javier E

Will the Profit Motive Fail Us on AI Safety? - WSJ

  • The mission of a for-profit company is, well, profit, the greatest return for investors. That’s the profound ethical crisis at the heart of artificial general intelligence development (“Capitalism Works, Says ChatGPT” by Holman Jenkins, Jr., Business World, Nov. 22).
  • it can sound naive to say that AI “won’t soon replace the human knack for synthesizing the most valuable insight from a welter of facts.” This seems to be exactly the goal of many transhumanists and the global elite. The speed at which this technology is developing means that it could be a dream or a nightmare in five years. If the controlling factor is mere profit, look for the nightmare.
Javier E

Opinion | Biden Trade Policy Breaks With Tech Giants - The New York Times

  • One reason that the idea of free trade has fallen out of fashion in recent years is the perception that trade agreements reflect the wishes of big American corporations, at everybody else’s expense.
  • U.S. officials fought for trade agreements that protect intellectual property — and drug companies got the chance to extend the life of patents, raising the price of medicine around the world. U.S. officials fought for investor protections — and mining companies got the right to sue for billions in “lost profit” if a country moved to protect its drinking water or the Amazon ecosystem. And for years, U.S. officials have fought for digital trade rules that allow data to move freely across national borders — prompting fears that the world’s most powerful tech companies would use those rules to stay ahead of competitors and shield themselves from regulations aimed at protecting consumers and privacy.
  • That’s why the Biden administration, which came into office promising to fight for trade agreements that better reflect the interests of ordinary people, has dropped its advocacy for tech-friendly digital trade rules that American officials have championed for more than a decade.
  • ...14 more annotations...
  • Last month, President Biden’s trade representative, Katherine Tai, notified the World Trade Organization that the American government no longer supported a proposal it once spearheaded that would have exported the American laissez-faire approach to tech. Had that proposal been adopted, it would have spared tech companies the headache of having to deal with many different domestic laws about how data must be handled, including rules mandating that it be stored or analyzed locally. It also would have largely shielded tech companies from regulations aimed at protecting citizens’ privacy and curbing monopolistic behavior.
  • The move to drop support for that digital trade agenda has been pilloried as disaster for American companies and a boon to China, which has a host of complicated restrictions on transferring data outside of China. “We have warned for years that either the United States would write the rules for digital trade or China would,” Senator Mike Crapo, a Republican from Idaho, lamented in a press statement. “Now, the Biden administration has decided to give China the pen.”
  • While some of this agenda is reasonable and good for the world — too much regulation stifles innovation — adopting this agenda wholesale would risk cementing the advantages that big American tech companies already enjoy and permanently distorting the market in their favor.
  • who used to answer the phone and interact with lobbyists at the U.S. trade representative’s office. The paper includes redacted emails between Trump-era trade negotiators and lobbyists for Facebook, Google, Microsoft and Amazon, exchanging suggestions for the proposed text for the policy on digital trade in the United States-Mexico-Canada Agreement. “While they were previously ‘allergic to Washington,’ as one trade negotiator described, over the course of a decade, technology companies hired lobbyists and joined trade associations with the goal of proactively influencing international trade policy,” Ms. Li wrote in the Socio-Economic Review.
  • That paper explains how U.S. trade officials came to champion a digital trade policy agenda that was nearly identical to what Google, Apple and Meta wanted: No restrictions on the flow of data across borders. No forced disclosure of source codes or algorithms in the normal course of business. No laws that would curb monopolies or encourage more competition — a position that is often cloaked in clauses prohibiting discrimination against American companies. (Since so many of the monopolistic big tech players are American, rules targeting such behavior disproportionately fall on American companies, and can be portrayed as unfair barriers to trade.)
  • This approach essentially takes the power to regulate data out of the hands of governments and gives it to technology companies, according to research by Henry Gao, a Singapore-based expert on international trade.
  • The truth is that Ms. Tai is taking the pen away from Meta, Google and Amazon, which helped shape the previous policy, according to a research paper published this year by Wendy Li,
  • Many smaller tech companies complain that big players engage in monopolistic behavior that should be regulated. For instance, Google has been accused of privileging its own products in search results, while Apple has been accused of charging some developers exorbitant fees to be listed in its App Store. A group of smaller tech companies called the Coalition for App Fairness thanked Ms. Tai for dropping support for the so-called tech-friendly agenda at the World Trade Organization.
  • Still, Ms. Tai’s reversal stunned American allies and foreign business leaders and upended negotiations over digital trade rules in the Indo-Pacific Economic Framework, one of Mr. Biden’s signature initiatives in Asia.
  • The about-face was certainly abrupt: Japan, Singapore and Australia — which supported the previous U.S. position — were left on their own. It’s unfortunate that U.S. allies and even some American officials were taken by surprise. But changing stances was the right call.
  • The previous American position at the World Trade Organization was a minority position. Only 34 percent of countries in the world have open data transfer policies like the United States, according to a 2021 World Bank working paper, while 57 percent have adopted policies like the European Union’s, which allow data to flow freely but leave room for laws that protect privacy and personal data.
  • Nine percent of countries have restrictive data transfer policies, including Russia and China.
  • The United States now has an opportunity to hammer out a sensible global consensus that gives tech companies what they need — clarity, more universal rules, and relative freedom to move data across borders — without shielding them from the kinds of regulations that might be required to protect society and competition in the future.
  • If the Biden administration can shepherd a digital agreement that strikes the right balance, there’s a chance that it will also restore faith in free trade by showing that trade agreements don’t have to be written by the powerful at the expense of the weak.
Javier E

Elon Musk's Outlook on Our Future Turns Dour - WSJ - 0 views

  • these days, Musk sounds worried—about everything from cyclical business jitters to existential global concerns.
  • This past week he warned during a forum on X about “civilizational risk” stemming from the Israel-Hamas war cascading into a wider conflict that would pit the U.S. against a united China, Russia and Iran. “I think we are sleepwalking our way into World War III,”
  • over the years, Musk has framed his business endeavors as striving to prevent calamity, a motivating ideal that helps inspire employees, investors and fans while inducing eye rolls among critics and rivals.
  • ...14 more annotations...
  • For him, Tesla is about trying to save humanity from global warming while SpaceX is about making humanity a multiplanetary species in case things don’t work out on Earth.
  • He said he worried that activating Starlink then would have further stoked the conflict. “I think if the Ukrainian attacks had succeeded in sinking the Russian fleet, it would have been like a mini Pearl Harbor and led to a major escalation,” he is quoted as saying in Walter Isaacson’s new biography, “Elon Musk.” 
  • “I tend to view the future as a series of probabilities—there’s certain probability that something will go wrong, some probability that it’ll go right; it’s kind of a spectrum of things. And to the degree that there is free will versus determinism, then we want to try to exercise that free will to ensure a great future.”
  • “Nuclear war probability is rising rapidly,” he tweeted last fall after months of fighting between the two countries. 
  • with the purchase of Twitter-turned-X, Musk couched the decision as keeping the social-media platform as a bastion for free speech in what he sees as a larger battle against cultural forces trying to squash diverse thought—or, as he calls it, the “woke mind virus.”
  • This past week, Musk returned to calling for peace, saying U.S. policies risk pushing Russia into an alliance with China just as the Israel-Hamas war has the potential to expand. He cautioned that many people overestimate U.S. military might in such a scenario
  • “We’re like a pro sports team that has been winning the championship for so long and so many years in a row that we have forgotten what losing even looks like,” Musk said. “And that’s when the champion team loses.” 
  • “My brother believes an economic winter is coming every single day,” Kimbal Musk once told lawyers about his older sibling’s mindset during a legal procedure. 
  • “To be frank, civilization is feeling a little fragile these days,” Musk said last year during an update on SpaceX’s large rocket development. “I’m an optimist, but I think we got to protect the downside here and try to build that city on Mars as soon as possible and secure the future of life.”
  • Among his stated worries, of which he has tweeted: “a big rock will hit Earth eventually & we currently have no defense” and “population collapse due to low birth rates is a much bigger risk to civilization than global warming.”
  • he framed his creation of an artificial-intelligence startup called xAI in his typically grandiose terms, cautioning that the technology has the potential to spiral out of control and essentially turn on its master, something akin to “The Terminator” movie. 
  • “I think it’s actually important for us to worry about a `Terminator’ future in order to avoid a `Terminator’ future,”
  • “Accept worst case outcome & assign it a probability, which is usually very low. Now think of good things in life & assign them probabilities—many are certain!” he tweeted a couple of years ago. “Bringing anxiety/fear to the conscious mind saps it of limbic emotional strength.”
  • “Cheery fatalism is very effective.”
Javier E

Opinion | Easy money, cut-rate energy and discount labor are all going away - The Washi... - 0 views

  • There is no reason to panic. The United States has had a nearly perfect economic cooling over the past few years, maintaining a strong jobs market and good GDP growth while settling down from the post-covid reopening highs. We are not only doing better than anyone expected; we are doing far better than our peers in Europe, including Britain, and Japan
  • So, what’s going on? Something that sounds bad but is, in reality, encouraging: The era of cheap is over.
  • The past five years — which have featured a pandemic, the war in Ukraine and the aftermath of both — signal the end to an economy that was based on cheap everything: cheap money, cheap energy and cheap labor
  • ...16 more annotations...
  • At home, that means more wind and solar farms, more electric cars and more diverse supply chains to build it all. This will be inflationary in the short term, as it means manufacturing new products and investing in new technologies
  • The first to go is the era of easy money. This isn’t a short-term response to President Biden’s much-needed post-pandemic fiscal stimulus. (In fact, that stimulus is exactly what kept the U.S. economy resilient while peers flagged, according to a recent New York Fed report.
  • This is a return to an economy that is more rational and hardheaded. Not all companies, or stocks, are created equal. Many have too much debt on their books.
  • Years of easy money propped up everything. A higher cost of capital will be painful temporarily, but it will give markets what they’ve needed for years — a reason for investors to sort out risky investments
  • Cheap energy is over, too. One outcome of Russia’s invasion of Ukraine is the realization (especially in Europe) that getting crucial commodities from autocrats is never a good idea
  • The United States, Europe and China are, in different ways, all speeding up the transition to a green economy.
  • All of that is going away or gone. A decade and a half of go-go speculation is finished. The era of cheap is kaput.
  • But it will be strongly deflationary if we can make the shift.
  • Finally, the era of cheap labor has ended
  • Wages are rising, and we’ve seen more labor activity, including strikes, this year than in the past four decades. More will follow. This is an appropriate response to decades of wage stagnation amid record corporate profits
  • Unions, but also non-union workers in many areas of the economy including construction and manufacturing, have been buoyed by the largest infrastructure investment since the 1950s — which has given them negotiating power that they haven’t had in years
  • Meanwhile, companies in the service sector are reconsidering their usual hire-and-fire-fast approach, having been trained by the pandemic to hang onto employees as long as possible.
  • Yes, artificial intelligence could throw a spanner in all this. CEOs are looking to use it to bring down labor costs. But workers today are becoming more proactive about demanding more control of both trade and technology;
  • The end of cheap is a huge shift. It means Main Street rather than Wall Street will drive the economy. It will make for a more balanced and resilient economy.
  • The bond market won’t like it, and there will be calls to return to the old ways, particularly if inflation continues to bite.
  • cheap isn’t really cheap. It’s just putting your troubles on layaway.
Javier E

Opinion | A Tech Overlord's Horrifying, Silly Vision for Who Should Rule the World - Th... - 0 views

  • Mr. Andreessen outlines a vision of technologists as the authors of a future in which the “techno-capital machine” produces everything that is good in the world.
  • In this vision, wealthy technologists are not just leaders of their business but keepers of the social order, unencumbered by what Mr. Andreessen labels “enemies”: social responsibility, trust and safety, tech ethics, to name a few.
  • this view is already enshrined in our culture. Major tent-poles of public policy support it.
  • ...18 more annotations...
  • the real problem with Mr. Andreessen’s manifesto may be not that it’s too outlandish, but that it’s too on-the-nose.
  • Neoreactionary thought contends that the world would operate much better in the hands of a few tech-savvy elites in a quasi-feudal system. Mr. Andreessen, through this lens, believes that advancing technology is the most virtuous thing one can do.
  • And the way we regard that wealth as a product of good decision-making and righteous hard work, no matter how many billions of dollars of investors’ money they may have vaporized, how many other people contributed to their success or how much government money subsidized it
  • In the case of ordinary individuals, however, debt is regarded as not just a financial failure but a moral one. (If you are successful and have paid your student loans off, taking them out in the first place was a good decision. If you haven’t and can’t, you were irresponsible and the government should not enable your freeloading.)
  • He articulates (albeit in a refrigerator magnet poetry kind of way) a strain of nihilism that has gained traction among tech elites, and reveals much of how they think about their few remaining responsibilities to society.
  • This strain of thinking is disdainful of democracy and opposes institutions (a free press, for example) that bolster it. It despises egalitarianism and views oppression of marginalized groups as a problem of their own making.
  • Who is doing the telling here, and who is being told? It’s not technology (a term so broad it encompasses almost everything) that’s reducing wages and increasing inequality — it’s the ultrawealthy, people like Mr. Andreessen.
  • It argues for an extreme acceleration of technological advancement regardless of consequences, in a way that makes “move fast and break things” seem modest.
  • Mr. Andreessen claims to be against authoritarianism, but really, it’s a matter of choosing the authoritarian — and the neoreactionary authoritarian of choice is a C.E.O. who operates as king.
  • it is taken seriously by people who imagine themselves potential Chief Executive Authoritarians, or at the very least proxies. This includes another Silicon Valley billionaire, Peter Thiel, who has funded some of Mr. Yarvin’s work and once wrote that he believed democracy and freedom were incompatible.
  • how did they sell so many other people on it? By pretending that for all their wealth and influence, they are not the real elites.
  • When Mr. Andreessen says “we” are being lied to, he includes himself, and when he names the liars, they’re those in “the ivory tower, the know-it-all credentialed expert worldview,” who are “disconnected from the real world, delusional, unelected, and unaccountable — playing God with everyone else’s lives, with total insulation from the consequences.”
  • His depiction of academics of course sounds a lot like — well, like tech overlords, who are often insulated from the real-world consequences of their inventions, including but not limited to promoting disinformation, facilitating fraud and enabling genocidal regimes.
  • “We are told that technology takes our jobs,” Mr. Andreessen writes, “reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.”
  • You can see it in the way we valorize the C.E.O.s of “unicorn” companies who have expanded their wealth far beyond what could possibly be justified by their individual contributions
  • The argument for total acceleration of technological development is not about optimism, except in the sense that the Andreessens and Thiels and Musks are certain that they will succeed. It’s pessimism about democracy — and ultimately, humanity.
  • the billionaire classes of Silicon Valley are frustrated that they cannot just accelerate their way into the future, one in which they can become human/technological hybrids and live forever in a colony on Mars
  • In pursuit of this accelerated post-Singularity future, any harm they’ve done to the planet or to other people is necessary collateral damage. It’s the delusion of people who’ve been able to buy their way out of everything uncomfortable, inconvenient or painful, and don’t accept the fact that they cannot buy their way out of death.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
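The idea that prediction is the training signal can be illustrated with a toy example. This is a minimal sketch, not how GPT-class models actually work: instead of a neural network it uses simple bigram counts, but it shows the same principle the excerpt describes — the model learns only by predicting the next word, and its predictions sharpen as it is fed more sentences. All names (`train`, `predict_next`, the sample corpus) are invented for illustration.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Learn, for each word, how often each other word follows it."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1  # one tiny "adjustment" per observed pair
    return followers

def predict_next(model, word):
    """Predict the most frequent follower of `word` seen in training."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train(corpus)
print(predict_next(model, "the"))  # "cat" — it follows "the" most often
print(predict_next(model, "sat"))  # "on" — "sat on" appears in two sentences
```

A real language model replaces the count table with a neural network whose continuous internal representations generalize across words, which is what lets the "geometric model of language" emerge; the toy above can only repeat exact pairs it has seen.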
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
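The memorize-then-generalize distinction Millière describes can be caricatured in a few lines. This is only a toy illustration, not a transformer: a lookup table perfectly reproduces its training examples but returns nothing on unseen inputs, while the learned rule generalizes.

```python
# Toy illustration of memorization vs. concept learning.
# The "memorizer" is a lookup table over training examples;
# the "learned rule" is the concept (addition) the model
# eventually pivots to when memorization stops paying off.

train = {(2, 2): 4, (3, 5): 8, (7, 1): 8}

def memorizer(a, b):
    # Perfect on the training set, clueless elsewhere.
    return train.get((a, b))

def learned_rule(a, b):
    # The underlying concept generalizes to any inputs.
    return a + b

print(memorizer(2, 2), learned_rule(2, 2))      # both correct on training data
print(memorizer(10, 20), learned_rule(10, 20))  # only the rule handles new inputs
```

The point of the anecdote is that the small arithmetic model behaved like the first function early in training and like the second after its predictive power plateaued.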
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
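Studies of this kind typically score each occupation by weighting the human abilities it requires against an estimate of AI capability on each ability. A minimal sketch of such an exposure index, with all abilities, weights, and capability scores invented for illustration (this is not Felten's actual methodology or data):

```python
# Hypothetical occupational-exposure index in the spirit of the
# studies described above. Exposure = importance-weighted average
# of AI capability across the abilities a job requires.
# All numbers are made up for illustration.

AI_ABILITY = {  # 0 = AI can't do it, 1 = AI matches humans
    "written_comprehension": 0.9,
    "deductive_reasoning": 0.7,
    "fluency_of_ideas": 0.8,
    "manual_dexterity": 0.1,
}

OCCUPATIONS = {  # ability -> importance weight (weights sum to 1)
    "lawyer": {"written_comprehension": 0.5,
               "deductive_reasoning": 0.4,
               "fluency_of_ideas": 0.1},
    "plumber": {"manual_dexterity": 0.8,
                "deductive_reasoning": 0.2},
}

def exposure(occupation: str) -> float:
    weights = OCCUPATIONS[occupation]
    return sum(w * AI_ABILITY[a] for a, w in weights.items())

ranked = sorted(OCCUPATIONS, key=exposure, reverse=True)
print(ranked)  # the white-collar job scores as more exposed
```

Even with toy numbers, the mechanism reproduces the study's headline pattern: jobs built on reading, reasoning, and idea generation score high; jobs built on manual dexterity score low.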
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
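Altman's equal-share scheme is simple arithmetic: divide total capacity by the world's population, and let people pool their slices. A minimal sketch, with the total compute figure entirely made up:

```python
# Sketch of the (self-described "probably bad") equal-share idea:
# each person gets 1/8,000,000,000 of annual AI capacity, and
# shares can be pooled for larger projects.

WORLD_POPULATION = 8_000_000_000

def per_person_share(total_compute_hours: float) -> float:
    """One person's annual slice of total AI compute."""
    return total_compute_hours / WORLD_POPULATION

def pooled_share(total_compute_hours: float, contributors: int) -> float:
    """Compute available to a group that pools its shares."""
    return per_person_share(total_compute_hours) * contributors

total = 80e9  # hypothetical: 80 billion GPU-hours per year
print(per_person_share(total))         # 10 GPU-hours each
print(pooled_share(total, 1_000_000))  # a million people pooling for a "cancer-curing run"
```

The arithmetic is trivial; the hard parts Altman glosses over (verifying identity, preventing resale concentration) are exactly what projects like Worldcoin are aimed at.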
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

How inheritance data secretly explains U.S. inequality - The Washington Post - 0 views

  • Every three years the Fed, with the help of NORC at the University of Chicago, asks at least 4,500 Americans an astonishingly exhaustive, almost two-hour battery of questions on income and assets, from savings bonds to gambling winnings to mineral rights. One of our all-time favorite sources, the survey provides our best measure of America’s ghastly wealth disparities.
  • It also includes a deep dive on inheritance, the passing down of the family jewels (or whatnot) from parents (73 percent in 2022), grandparents (14 percent) and aunts and uncles (8 percent).
  • The average American has inherited about $58,000 as of 2022. But that’s if you include the majority of us whose total lifetime inheritance sits at $0
  • Since 1992, the number of people getting inheritances from parents has nearly doubled even as bequests from grandparents and aunts and uncles have remained flat. Your 50s will be your peak inheriting ages, which makes sense given that an average 65-year-old in the U.S. can expect to live to around age 83 and your parents are, sadly, mortal.
  • If you look only at the lucky few who inherited anything, their average is $266,000
  • And if you look only at those in their 70s, it climbs to $344,000. Of course, that’s the value at the time of the gift. Add inflation and market-level returns and many bequests are worth much more by the time you earn your septuagenarian badge.
  • when we ran the numbers, we found they weren’t random at all.
  • White folks are about three times more likely to inherit than their Black, Hispanic or Asian friends
  • it remains vast enough to help explain why the typical White family has more than six times the net worth of the typical Black American family
  • Up and down the demographic charts, it appears to be a case of to whom much is given … much more is given
  • Folks in the bottom 50 percent of earners inherit at half the national rate, while those in the top 1 percent are twice as likely to inherit something.
  • he confirmed that inheritances make the rich richer. But a rich kid’s true inheritance goes far beyond cash value: In a million less-measurable ways, elite parents give you a head start in life. By the time they die and hand you a windfall, you’ve already used all your advantages to accumulate wealth of your own.
  • “It’s not just the dollar amount that you get when your parents die,” Ricco said. “It’s the safety net that you had to start a business when you were younger, or the ability to put down a larger share of your savings into a down payment and a house because you know that you can save less for retirement.
  • “Little things like that are probably the main mechanisms through which intergenerational wealth is transmitted and are not easily captured just by the final value of what you see.”
  • Just one variable — how much you inherit — can account for more than 60 percent of U.S. wealth inequality
  • So, if you had to guess someone’s economic station in life and you could peek at only one data point, inheritance would be a pretty good bet. It’s one of the clearest socioeconomic signals on the planet.
  • “They actually reflect many advantages, many inequalities of opportunities that we face.”
  • The U.S. tax system does little to temper our uneven inheritance. Consider the stepped-up basis provision, “one of the most egregious (tax loopholes) that we have,”
  • When you sell something at a profit, you typically pay capital gains tax. But you can avoid that tax by holding the asset until you expire. At your death, the cost basis of your assets gets stepped up to their current value — meaning your heirs avoid getting taxed on what might be a very substantial gain.
  • Say you’re a natural-soda fan who bought $1,000 of Hansen Natural Corp. stock in 2000. You watched your money grow to more than $1.15 million as sleepy Hansen became the world-eating Monster Beverage Corp. Selling the stock would force you to pay capital gains on more than $1 million in earnings, so instead, you took it to the grave
  • (If you needed cash, you probably borrowed against your stockpiled stock, a common strategy among the 1 percent.)
  • If your heirs sell it, they’ll pay no taxes. If the value of the stock rises to, say, $1.151 million, they would owe taxes only on that extra $1,000.
  • Now multiply that loophole by the millions of homes, businesses, equities and other assets being handed down each year
  • It encourages older folks to hoard homes and businesses they can no longer make full use of, assets our housing-starved millennial readers would gladly snap up.
  • Early on, Goldwein said, it may have been considered necessary because it was difficult to determine the original value of long-held property. Revenue lost to the loophole was partly offset by a simpler-to-administer levy: the estate tax.
  • For now, you’ll pay the federal estate tax only on the part of your fortune that exceeds $12.92 million ($25.84 million for couples), rising to $13.61 million in 2024 — and that’s only if your tax lawyers aren’t smart enough to dodge it.
  • “Between politicians continuing to cut the estate tax and taxpayers becoming increasingly good at avoiding it, very few now pay it,” Goldwein said. “That means we now have a big net tax break for most people inheriting large amounts of money.”
  • Kumon presents a convincing explanation: If you didn’t produce a male heir in Japan, it was customary to adopt one. A surplus son from another family would marry into yours. That kept your property in the family.
  • In Europe, if an elite family didn’t produce a male heir, which happened more than a quarter of the time, the default was for a daughter to marry into another well-off family and merge assets. So while Japanese family lines remained intact from generation to generation, European family lines merged, concentrating wealth into fewer and fewer hands.
  • As other families compete to marry into the Darcys’ colossal estate — spoiler for a novel from 1813! — inequality increases.
  • Given a few centuries, even subtle variations in inheritance patterns can produce sweeping societal differences.
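The stepped-up basis mechanics excerpted above can be sketched as a quick calculation. This is a minimal illustration only: it assumes a flat 20% long-term capital gains rate (actual rates vary by income, filing status and state), and the dollar figures come from the article's Hansen/Monster example.

```python
# Illustrating the stepped-up basis loophole from the article's example.
# ASSUMPTION: flat 20% long-term capital gains rate for simplicity.
CAP_GAINS_RATE = 0.20

def tax_if_sold(cost_basis: float, sale_price: float, rate: float = CAP_GAINS_RATE) -> float:
    """Capital gains tax owed on selling an appreciated asset."""
    gain = max(sale_price - cost_basis, 0)
    return gain * rate

# $1,000 of Hansen stock in 2000 grows to $1.15 million (Monster Beverage).
basis, value_at_death = 1_000, 1_150_000

# Selling while alive: tax on the full ~$1.149M gain.
tax_living = tax_if_sold(basis, value_at_death)

# At death, the heirs' cost basis is "stepped up" to the date-of-death value.
# If they later sell at $1.151M, only the $1,000 of post-death gain is taxed.
stepped_up_basis = value_at_death
tax_heirs = tax_if_sold(stepped_up_basis, 1_151_000)

print(f"Tax if sold before death: ${tax_living:,.0f}")  # $229,800
print(f"Tax paid by heirs:        ${tax_heirs:,.0f}")   # $200
```

Under these assumptions, holding the asset until death wipes out nearly all of the tax on a million dollars of appreciation, which is the incentive to "hoard" assets that the article describes.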
Javier E

Sam Altman's ouster at OpenAI exposes growing rift in AI industry - The Washington Post - 0 views

  • Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
  • “My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.
  • Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity
  • ...7 more annotations...
  • Helen Toner, the director of strategy and foundational research grants for Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in the work.
  • Sutskever helped create AI software at the University of Toronto, called AlexNet, which classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for the field of computer vision and deep learning.
  • He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
  • At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”
  • OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they invested into OpenAI, but capped how much they could get back, with the rest flowing back into the company’s nonprofit. The company’s structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive.
  • As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles have turned to anger at OpenAI’s board.
  • “What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”
Javier E

Before OpenAI, Sam Altman was fired from Y Combinator by his mentor - The Washington Post - 0 views

  • Four years ago, Altman’s mentor, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported
  • Altman’s clashes, over the course of his career, with allies, mentors and even members of a corporate structure he endorsed, are not uncommon in Silicon Valley, amid a culture that anoints wunderkinds, preaches loyalty and scorns outside oversight.
  • Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture
  • ...11 more annotations...
  • The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.
  • A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.
  • “It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.
  • a person familiar with the board’s proceedings said the group’s vote was rooted in worries he was trying to avoid any checks on his power at the company — a trait evidenced by his unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.
  • Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board
  • The same qualities have made Altman an unparalleled fundraiser, a consummate negotiator, a powerful leader and an unwanted enemy, winning him champions in former Google Chairman Eric Schmidt and Airbnb CEO Brian Chesky.
  • “Ninety plus percent of the employees of OpenAI are saying they would be willing to move to Microsoft because they feel Sam’s been mistreated by a rogue board of directors,” said Ron Conway, a prominent venture capitalist who became friendly with Altman shortly after he founded Loopt, a location-based social networking start-up, in 2005. “I’ve never seen this kind of loyalty anywhere.”
  • But Altman’s personal traits — in particular, the perception that he was too opportunistic even for the go-getter culture of Silicon Valley — have at times led him to alienate even some of his closest allies, say six people familiar with his time in the tech world.
  • Altman’s career arc speaks to the culture of Silicon Valley, where cults of personality and personal networks often take the place of stronger management guardrails — from Sam Bankman-Fried’s FTX to Elon Musk’s Twitter
  • But some of Altman’s former colleagues recount issues that go beyond a founder angling for power. One person who has worked closely with Altman described a pattern of consistent and subtle manipulation that sows division between individuals.
  • AI executives, start-up founders and powerful venture capitalists had become aligned in recent months, concerned that Altman’s negotiations with regulators were dangerous to the advancement of the field. Although Microsoft, which owns a 49 percent stake in OpenAI, has long urged regulators to implement guardrails, investors have fixated on Altman, who has captivated legislators and embraced his regular summons to Capitol Hill.
Javier E

Climate financial crisis: Can we contain it? - DW - 12/11/2023 - 0 views

  • stranded assets. That's how business people refer to these vast, idling industrial infrastructures. It's abandoned property that will have to be written off in a company's balance sheets before the end of its planned lifetime.
  • Germany has been twisting and turning over its phaseout of coal and lignite power plants over the past five years. Originally, it planned to stop using coal in its energy mix in 2038. Then the current government accelerated that goal by eight years to 2030. Recently, some politicians have called that decision into question.
  • The earlier phaseout plan could lose operating companies €11.6 billion ($12.5 billion), according to a 2022 study by Dresden University.
  • ...12 more annotations...
  • That's unrealized profits for companies that invested in the energy infrastructure, betting on a longer life span, plus potential lost income for investors who bought stock in the utility companies. 
  • Globally, up to 50% of the currently used and planned fossil fuel-dependent power plants would have to be phased out earlier than their planned lifetime to limit climate change to below 2 degrees warming. Taking only coal into account, this represents assets worth between $150 billion and $1.4 trillion.
  • Making exact assessments of the size of the problem is difficult because it remains unclear which path policymakers will take. And what should be included in estimates — the value of minerals left in the ground? Unrealized company profits? Or even combustion engines that will no longer be of use? 
  • "The point is not whether there is a financial bubble, but whether it will burst or not. And what kind of actions governments and financial supervisors will take, and central banks also, will make it burst or not.
  • A case in point are the money managers set up to handle retirement for billions of people globally: Pension funds are tasked to hold their clients' money and turn a profit from the investments. That means investing the proceeds into stocks on the market.  But with large chunks of the market tied to the fossil fuel industry, a lot of the money is invested in coal, oil and gas. And this money could lose value under ambitious climate policies.
  • "A pension fund in Europe could be exposed as much as 48% to companies that could be at risk of stranded assets," said Irene Monasterolo. The professor of climate finance at Utrecht University is part of a large and growing group of academics and experts drawing out the risks to the wider financial system posed by these carbon assets
  • Mark Carney, the former Bank of England governor, is largely credited with kicking off a public debate on the financial stability concerns due to climate change. Speaking in front of London's insurance executives in 2015, he called for more transparency on climate risks — information that should then feed back into climate policies in reference to risks in financial markets.
  • Thus far, these risks haven't been resolved. Speaking with DW, Monasterolo warned that the amount and intricate interconnectedness of carbon assets could lead to a disastrous outcome.
  • "The problem with fossil fuel is that it's worth between $16 trillion to $300 trillion, depending on how you calculate. So it's massive," said Joyeeta Gupta, an economics professor at the University of Amsterdam. But this industry is also the base for a huge pile of financial wealth. 
  • Regulators seem to have caught up with the warning calls. In late November, the European Central Bank threatened to fine about 20 European banks for mishandling climate risks, Bloomberg reported. But returns on investment could stack pensioners against tough climate action.
  • Most large central banks globally now require their banks to stress test their business models for climate scenarios. But what is essentially at odds, said Monasterolo, is the "long-term dimension of climate change versus the short-term decision-making in policy and in finance."
  • The long period of transition in Germany's west turned polluting smokestacks into tourist attractions. The former mine in Essen was turned into a museum and event location — a new asset for the region, and a change that put the public good over short-term profits. 
Javier E

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times - 0 views

  • haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • ...7 more annotations...
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”
criscimagnael

The Middlemen Helping Russian Oligarchs Get Superyachts and Villas - The New York Times - 0 views

  • On Feb. 24, as Russian troops poured into Ukraine on Day 1 of the invasion, an employee of a yacht management company sent an email to the captain of the Amadea, a $325 million superyacht: “Importance: High.”
  • At Imperial Yachts, no detail is too small to sweat. Based in Monaco, with a staff of about 100 — plus 1,200 to 1,500 crew members aboard yachts — the company caters to oligarchs whose fortunes turn on the decisions of President Vladimir V. Putin. Imperial Yachts and its Moscow-born founder, Evgeniy Kochman, have prospered by fulfilling their clients’ desires to own massive luxury ships.
  • Imperial’s rise has benefited an array of businesses across Europe, including German shipbuilders, Italian carpenters, French interior design firms and Spanish marinas, which together employ thousands of people. Imperial Yachts is at the center of what is essentially an oligarch-industrial complex, overseeing the flow of billions of dollars from politically connected Russians to that network of companies, according to interviews, court documents and intelligence reports.
  • ...15 more annotations...
  • Andrew Adams, a federal prosecutor leading the task force, said in an interview that “targeting people who make their living by providing a means for money laundering is a key priority.”
  • Along with the Amadea, Imperial Yachts oversaw the construction of the Scheherazade, a $700 million superyacht that U.S. officials say is linked to Mr. Putin, and the Crescent, which the Spanish police believe is owned by Igor Sechin, chairman of the state-owned oil giant Rosneft.
  • Mr. Timchenko and his partners designed the Scheherazade — seized in early May by the Italian police — as a gift for Mr. Putin’s use, according to the assessment. Together, the three vessels may have cost as much as $1.6 billion, enough to buy six new frigates for the Russian navy.
  • But U.S. officials are not buying such explanations. Elizabeth Rosenberg, the assistant secretary for terrorist financing and financial crimes at the Treasury Department, said it was the responsibility of people in the yacht services industry to avoid doing business with people under sanctions.“And if you do,” she said, “you yourself will be subject to sanctions.”
  • Mr. Kochman, 41, got his start in the yacht business in Russia in 2001, the year after Mr. Putin took power, selling Italian-made yachts.
  • “We grow with our clients like parents with babies,
  • We buy your yachts and you buy our gas,”
  • “The client may be fully immersed in the project, he might not be,” he said in a phone interview. “I channel everything through Mr. Kochman.”
  • “We are not currently working with anyone on the sanctions list and we have shared all requested information with the authorities, with whom we continue to work,” the spokesman said in an email.
  • But according to U.S. investigators, Imperial Yachts brokered the sale of the Amadea late last year to Suleiman Kerimov, a Russian government official and billionaire investor who has been on the U.S. sanctions list since 2018. He was among a group of seven oligarchs who the American officials said “benefit from the Putin regime and play a key role in advancing Russia’s malign activities.”
  • Mr. Clark, the lawyer for Imperial Yachts, said the company “would never knowingly create structures to hide or conceal ownership, nor would we knowingly broker deals to sanctioned individuals.”
  • One thing is clear, according to the U.S. task force: Members of Mr. Kerimov’s family were on board the Amadea earlier this year, based on investigators’ interviews with crew members, reviews of emails between the ship and Imperial, and other documents from the superyacht including copies of passports.
  • The cast of characters restoring Villa Altachiara to its former glory is familiar. Mr. Kochman’s BLD Management is supervising the project. Mr. Gey is helping to oversee the local and international artisans restoring the interior of the mansion. Yachtline 1618, an Italian high-end carpentry company that has worked on Imperial Yachts projects, is also involved.
  • Locals have never seen Mr. Khudainatov. Mariangela Canale, owner of the town’s 111-year-old bakery, said she was worried that Portofino would become a place where the homes were mere investments, owned by wealthy people who rarely visited, and the community would lose its soul. “Even the richest residents have always come for a chat or to buy my focaccia bread with their children, or have dinner in the piazza,” she said. “They live with us.”
  • “Everything is under very strict nondisclosure agreements,” Mr. Gey said. “It’s a standard in the industry.”He added, “It’s not like there is something to hide.”
Javier E

Is China Uninvestable? Complaints from Foreigners Won't Sway Xi Jinping - Bloomberg - 0 views

  • look at what’s been happening throughout the ongoing Hong Kong market selloff: Chinese investors have been buying the dip. It’s a sign that the offshore marketplace is not entirely broken. When the dust settles, the Hong Kong market will have become more domestic and retail-driven, not unlike what’s happened to the U.S. stock market since the pandemic began two years ago.
peterconnelly

U.S. Imposes Sanctions on Yacht Company That Caters to Russian Elites - The New York Times - 0 views

  • WASHINGTON — The U.S. government leveled sanctions against a yacht management company and its owners, describing them as part of a corrupt system that allows Russian elites and President Vladimir V. Putin to enrich themselves, the Treasury Department announced on Thursday.
  • “Russia’s elites, up to and including President Putin, rely on complex support networks to hide, move and maintain their wealth and luxury assets,” said Brian Nelson, the under secretary for terrorism and financial intelligence at the Treasury Department.
  • “We will continue to enforce our sanctions and expose the corrupt systems by which President Putin and his elites enrich themselves,” he added.
  • ...6 more annotations...
  • According to a U.S. intelligence assessment, a group of investors led by one of Russia’s richest men, Gennady Timchenko, who has been under sanctions since 2014, provided the money to buy three ships: the Scheherazade, the Crescent and the Amadea, whose construction at a German shipyard was overseen by Imperial Yachts. Their combined cost of as much as $1.6 billion could have bought six new frigates for the Russian navy.
  • “Imperial Yachts conducts all its business in full compliance with laws and regulations in all jurisdictions in which we operate,” the company added. “We are not involved in our clients’ financial affairs.”
  • But Treasury officials disputed that contention in their announcement. U.S. and international authorities have moved to seize the three yachts connected to Mr. Kochman and his company.
  • In an interview Tuesday, before the new sanctions were announced, Elizabeth Rosenberg, the assistant secretary for terrorist financing and financial crimes at the Treasury Department, said that international cooperation to go after Russian oligarchs and their assets was increasing.
  • “It feels like we’re experiencing a sea change right now,” Ms. Rosenberg said. “It’s a huge leap forward on international cooperation for hunting assets, for freezing them and for pursuing law enforcement investigations and activity, including seizure activities.”
  • Treasury officials say taking action against oligarchs and the companies that help them spend their wealth will ultimately hurt the Russian government’s ability to wage war against Ukraine.
Javier E

London Is Losing Its Crown as a Luxury Shopping Destination - WSJ - 0 views

  • London is missing out on a spending boom by wealthy American and Middle Eastern tourists that began last summer and has benefited big cities on the European continent, mainly Paris and Milan. In January, VAT receipts from Middle Eastern visitors to continental Europe, a good proxy for luxury spending, were up 224% compared with the same month of 2019, based on data from tax refund company Global Blue
  • American spending was even heavier, with receipts up 297% over the period. The strong dollar means the discount available on luxury goods in Europe has been historically wide recently and U.S. tourists outspent all other nationalities in every month of 2022.
  • Britons are spending heavily on tax-free goods in the European Union. 
  • ...2 more annotations...
  • London’s luxury retailers are lobbying the U.K. government to reinstate VAT-free shopping for visitors. Big department stores like Harrods, owned by Qatar’s sovereign wealth fund, and Selfridges, which was sold for £4 billion in 2021 to Thai and Australian investors, relied heavily on tourist spending before the pandemic.
  • If overseas visitors continue to shop in Europe instead of Britain, landlords on the U.K. capital’s poshest streets could suffer. In 2022, London’s New Bond Street slipped out of the top-three ranking of the world’s most expensive retail streets, according to real estate firm Cushman & Wakefield. It was overtaken by Via Montenapoleone in Milan, where rents are now 9% above 2019 levels.
Javier E

The Reason Putin Would Risk War - The Atlantic - 0 views

  • Putin is preparing to invade Ukraine again—or pretending he will invade Ukraine again—for the same reason. He wants to destabilize Ukraine, frighten Ukraine. He wants Ukrainian democracy to fail. He wants the Ukrainian economy to collapse. He wants foreign investors to flee. He wants his neighbors—in Belarus, Kazakhstan, even Poland and Hungary—to doubt whether democracy will ever be viable, in the longer term, in their countries too.
  • Farther abroad, he wants to put so much strain on Western and democratic institutions, especially the European Union and NATO, that they break up.
  • Putin will also fail, but he too can do a lot of damage while trying. And not only in Ukraine.
  • ...19 more annotations...
  • He wants to undermine America, to shrink American influence, to remove the power of the democracy rhetoric that so many people in his part of the world still associate with America. He wants America itself to fail.
  • of all the questions that repeatedly arise about a possible Russian invasion of Ukraine, the one that gets the least satisfactory answers is this one: Why?
  • Why would Russia’s president, Vladimir Putin, attack a neighboring country that has not provoked him? Why would he risk the blood of his own soldiers?
  • To explain why requires some history
  • the most significant influence on Putin’s worldview has nothing to do with either his KGB training or his desire to rebuild the U.S.S.R. Putin and the people around him have been far more profoundly shaped, rather, by their path to power.
  • Putin missed that moment of exhilaration. Instead, he was posted to the KGB office in Dresden, East Germany, where he endured the fall of the Berlin Wall in 1989 as a personal tragedy.
  • Putin, like his role model Yuri Andropov, who was the Soviet ambassador to Hungary during the 1956 revolution there, concluded from that period that spontaneity is dangerous. Protest is dangerous. Talk of democracy and political change is dangerous. To keep them from spreading, Russia’s rulers must maintain careful control over the life of the nation. Markets cannot be genuinely open; elections cannot be unpredictable; dissent must be carefully “managed” through legal pressure, public propaganda, and, if necessary, targeted violence.
  • Eventually Putin wound up as the top billionaire among all the other billionaires—or at least the one who controls the secret police.
  • Try to imagine an American president who controlled not only the executive branch—including the FBI, CIA, and NSA—but also Congress and the judiciary; The New York Times, The Wall Street Journal, The Dallas Morning News, and all of the other newspapers; and all major businesses, including Exxon, Apple, Google, and General Motors.
  • He is strong, of course, because he controls so many levers of Russia’s society and economy
  • And yet at the same time, Putin’s position is extremely precarious. Despite all of that power and all of that money, despite total control over the information space and total domination of the political space, Putin must know, at some level, that he is an illegitimate leader
  • He knows that this system works very well for a few rich people, but very badly for everyone else. He knows, in other words, that one day, prodemocracy activists of the kind he saw in Dresden might come for him too.
  • In his mind, in other words, he wasn’t merely fighting Russian demonstrators; he was fighting the world’s democracies, in league with enemies of the state.
  • All of which is a roundabout way of explaining the extraordinary significance, to Putin, of Ukraine.
  • Of course Ukraine matters as a symbol of the lost Soviet empire. Ukraine was the second-most-populous and second-richest Soviet republic, and the one with the deepest cultural links to Russia.
  • modern, post-Soviet Ukraine also matters because it has tried—struggled, really—to join the world of prosperous Western democracies. Ukraine has staged not one but two prodemocracy, anti-oligarchy, anti-corruption revolutions in the past two decades. The most recent, in 2014, was particularly terrifying for the Kremlin
  • Putin’s subsequent invasion of Crimea punished Ukrainians for trying to escape from the kleptocratic system that he wanted them to live in—and it showed Putin’s own subjects that they too would pay a high cost for democratic revolution.
  • they are all a part of the same story: They are the ideological answer to the trauma that Putin and his generation of KGB officers experienced in 1989. Instead of democracy, they promote autocracy; instead of unity, they try constantly to create division; instead of open societies, they promote xenophobia. Instead of letting people hope for something better, they promote nihilism and cynicism.
  • from the Donbas to France or the Netherlands, where far-right politicians hang around the European Parliament and take Russian money to go on “fact-finding missions” to Crimea. It’s a longer way still to the small American towns where, back in 2016, voters eagerly clicked on pro-Trump Facebook posts written in St. Petersburg