Skip to main content

Home/ History Readings/ Group items tagged server

Rss Feed Group items tagged

Javier E

No, America is Not Experiencing a Version of China's Cultural Revolution - by Nicholas ... - 0 views

  • The first institution Maoists captured was not the academy, it was the state. The seeds of the Cultural Revolution were not in the academy, but in the perceived weakness of the communist party in China, and Mao’s position within the party, after the failures of the Great Leap Forward. Maoists took over the state first, and 17 years later launched a campaign to force cultural change in the academy and elsewhere.
  • Cultural power, and related concepts like “privilege,” aren’t nothing, but they’re vaguer and less impactful than the state, which can credibility threaten, authorize, excuse, and utilize force.
  • State-backed violence made the Cultural Revolution, and if you think the social justice movement is similar, you misunderstand it.
  • ...59 more annotations...
  • Terrorism, public health, and police violence are all life-and-death issues, and all involve the state, so they’re more consequential than the criticism, shunning, and loss of professional opportunities associated with cancel culture. But that doesn’t mean the latter isn’t a problem.
  • We can, and should, care about more than one thing at a time, and many things that aren’t the worst problem deserve attention.
  • Nevertheless, it’s important to assess problems accurately.
  • Michael Hobbes calls all this worrying about wokeness a “moral panic.” That’s a term some use online to wave away serious concerns, but Hobbes uses it the way sociologist Stanley Cohen did in the 1970s, as a phenomenon where something becomes “defined as a threat to societal values and interests” based on media accounts that “exaggerate the seriousness, extent, typicality and/or inevitability of harm.”
  • The point here is not that stranger abductions never happened, but that they didn’t happen nearly as much as the media, concerned parents, and lawmakers thought. And because stranger kidnappings were not a national crisis, but treated as one, the “solution” made things worse.
  • Along similar lines, Hobbes argues that anti-woke alarm-bell-ringing relies on a relatively small number of oft-repeated anecdotes. Some don’t stand up to scrutiny, and some of those that do are low-stakes. The resulting moral panic fuels, among other things, a wave of red state legislation aimed at banning “critical race theory” that uses vague language and effectively cracks down on teaching about racism in American history.
  • For that, we should look to data, and here again the problem looks smaller than anti-woke liberals make it out to be
  • In the universe of cancel culture cases, I find more incidents concerning than Hobbes and fewer concerning than Young, but “this one incident wasn’t actually bad” vs. “yes it really was” doesn’t answer the question about size and scope. It doesn’t tell us what, if anything, society should do about it.
  • In Liberal Currents, Adam Gurri cites the Foundation for Individual Rights in Education (FIRE), which documented 426 “targeting incidents involving scholars at public and private American institutions of higher education” since 2015 and 492 “disinvitation attempts” since 1998
  • The organization Canceled People lists 217 cases of “cancellation” since 1991, while the National Association of Scholars (NAS) lists 194 cancellations in academia since 2004 (plus two in the 20th century).
  • Based on these numbers, Gurri concludes, “If any other problem in social life was occurring at this frequency and at this scale, we would consider it effectively solved.”
  • There are nearly 4,000 colleges and universities in the United States. U.S. News’ 2021 rankings of the best schools lists 1,452. Using that smaller number and NAS’s figure of 194 academic cancellations since 2004, the chance of a college or university experiencing a cancellation in a given year is less than 0.8 percent.
  • There are some concerning cases in the NAS database too, in which professors were fired for actions that should be covered under a basic principle of academic freedom — for example, reading aloud a Mark Twain passage that included a racial slur, even after giving students advance notice — so this isn’t a total non-issue. But the number of low stakes and relatively unobjectionable cases means the risk is lower than 0.8 percent (and it’s even lower than that, since NAS includes Canada and my denominator is ranked schools in the United States).
  • Similarly, FIRE classifies about 30 percent of the attempted disinvitations in its database as from the right. About 60 percent are from the left — the other 10 percent N/A — so if you want to argue that the left does this more, you’ve got some evidence. But still, the number of cases from the left is lower than the total. And more than half of FIRE’s attempted disinvitations did not result in anyone getting disinvited.
  • Using U.S. News’ ranked schools as the denominator, the chance of left-wing protestors trying to get a speaker disinvited at a college or university in a given year is about 0.5 percent. The chance of an actual disinvitation is less than 0.25 percent. And that’s in the entire school. To put this in perspective, my political science department alone hosts speakers most weeks of the semester.
  • Two things jump out here:
  • Bari Weiss and Anne Applebaum both cite a Cato study purporting to show this effect:
  • even if we assume these databases capture a fraction of actual instances — which would be surprising, given the media attention on this topic, but even so — the data does not show an illiberal left-wing movement in control of academia.
  • The number agreeing that the political climate prevents them from saying things they believe ranges from 42% to 77%, which is high across political views. That suggests self-censorship is, to a significant degree, a factor of the political, cultural, and technological environment, rather than caused by any particular ideology.
  • Conservatives report self-censoring more than liberals do.
  • The same study shows that the biggest increase in self-censorship from 2017 to 2020 was among strong liberals (+12), while strong conservatives increased the least (+1).
  • If this data told a story of ascendent Maoists suppressing conservative speech, it would probably be the opposite, with the left becoming more confident of expressing their views — on race, gender, etc. — while the right becomes disproportionately more fearful. Culture warriors fixate on wokeness, but when asked about the political climate, many Americans likely thought about Trumpism
  • Nevertheless, this data does show conservatives are more likely to say the political climate prevents them from expressing their beliefs. But what it doesn’t show is which beliefs or why.
  • Self-censoring can be a problem, but also not. The adage “do not discuss politics or religion in general company” goes back to at least 1879. If someone today is too scared to say “Robin DiAngelo’s conception of ‘white fragility’ does not stand up to logical scrutiny,” that’s bad. If they’re too scared to shout racial slurs at minorities, that isn’t. A lot depends on the content of the speech.
  • When I was a teenager in the 1990s, anti-gay slurs were common insults among boys, and tough-guy talk in movies. Now it’s a lot less common, one of the things pushed out of polite society, like the n-word, Holocaust denial, and sexual harassment. I think that’s a positive.
  • Another problem with the anti-woke interpretation of the Cato study is media constantly tells conservatives they’re under dire threat.
  • Fox News, including Tucker Carlson (the most-watched show on basic cable), Ben Shapiro and Dan Bongino (frequently among the most-shared on Facebook), and other right-wing outlets devote tons of coverage to cancel culture, riling up conservatives with hyperbolic claims that people are coming for them
  • Anti-woke liberals in prestigious mainstream outlets tell them it’s the Cultural Revolution
  • Then a survey asks if the political climate prevents them from saying what they believe, and, primed by media, they say yes.
  • With so many writers on the anti-woke beat, it’s not especially plausible that we’re missing many cases of transgender servers getting people canceled for using the wrong pronoun in coffee shops to the point that everyone who isn’t fully comfortable with the terminology should live in fear. By overstating the threat of cancellation and the power of woke activists, anti-woke liberals are chilling speech they aim to protect.
  • a requirement to both-sides the Holocaust is a plausible read of the legal text. It’s an unsurprising result of empowering the state to suppress ideas in an environment with bad faith culture warriors, such as Chris Rufo and James Lindsay, advocating state censorship and deliberately stoking panic to get it.
  • Texas, Florida, and other states trying to suppress unwanted ideas in both K-12 and higher ed isn’t the Cultural Revolution either — no state-sanctioned mass violence here — but it’s coming from government, making it a bigger threat to speech and academic freedom.
  • To put this in perspective, antiracist guru Ibram X. Kendi has called for an “anti-racist Constitutional amendment,” which would “make unconstitutional racial inequity over a certain threshold, as well as racist ideas by public officials,” and establish a Department of Anti-Racism to enforce it. It’s a terrible proposal that would repeal the First Amendment and get the state heavily involved in policing speech (which, even if well-intentioned, comes with serious risks of abuse).
  • It also doesn’t stand the slightest chance of happening.
  • It’s fair to characterize this article as anti-anti-woke. And I usually don’t like anti-anti- arguments, especially anti-anti-Trump (because it’s effectively pro). But in this case I’m doing it because I reject the binary.
  • American politics is often binary.
  • Culture is not. It’s an ever-changing mishmash, with a large variety of influential participants
  • There have been unmistakable changes in American culture — Western culture, really — regarding race and gender, but there are way more than two sides to that. You don’t have to be woke or anti-woke. It’s not a political campaign or a war. You can think all sorts of things, mixing and matching from these ideas and others.
  • I won’t say “this is trivial” nor “this stuff is great,” because I don’t think either. At least not if “this” means uncompromising Maoists seeking domination.
  • I think that’s bad, but it’s not especially common. It’s not fiction — I’m online a lot, I have feet in both media and academia, I’ve seen it too — but, importantly, it’s not in control
  • I think government censorship is inherently more concerning than private censorship, and that we can’t sufficiently counter the push for state idea-suppression without countering the overstated fears that rationalize it.
  • I think a lot of the private censorship problem can be addressed by executives and administrators — the ones who actually have power over businesses and universities — showing a bit of spine. Don’t fold at the first sign of protest. Take some time to look into it yourself, and make a judgment call on whether discipline is merited and necessary. Often, the activist mob will move on in a few days anyway.
  • I think that, with so much of the conversation focusing on extremes, people often miss when administrators do this.
  • I think violence is physical, and that while speech can be quite harmful, it’s better to think of these two things as categorically different than to insist harmful speech is literally violence.
  • at a baseline, treating people as equals means respecting who they say they are. The vast majority are not edge cases like a competitive athlete, but regular people trying to live their lives. Let them use the bathroom in peace.
  • I think the argument that racism and other forms of bigotry operate at a systemic or institutional, in addition to individual, level is insightful, intuitive, and empirically supported. We can improve people’s lives by taking that into account when crafting laws, policies, and practices.
  • I think identity and societal structures shape people’s lives (whether they want it to or not) but they’re far from the only factors. Treating them as the only, or even predominant, factor essentializes more than it empowers.
  • I think transgender and non-binary people have a convincing case for equality. I don’t think that points to clear answers on every question—what’s the point of gender segregated sports?
  • I think free association is an essential value too. Which inherently includes the right of disassociation.
  • I think these situations often fall into a gray area, and businesses should be able to make their own judgment calls about personnel, since companies have a reasonable interest in protecting their brand.
  • I think free speech is an essential value, not just at the legal level, but culturally as well. I think people who would scrap it, from crusading antiracists to social conservatives pining for Viktor Orban’s Hungary, have a naively utopian sense of how that would go (both in general and for them specifically). Getting the state involved in speech suppression is a bad idea.
  • I think America’s founding was a big step forward for government and individual liberty, and early America was a deeply racist, bigoted place that needed Amendments (13-15; 19), Civil Rights Acts, and landmark court cases to become a liberal democracy. I don’t think it’s hard to hold both of those in your head at the same time.
  • I think students learning the unvarnished truth about America’s racist past is good, and that teaching students they are personally responsible for the sins of the past is not.
  • I think synthesis of these cultural forces is both desirable and possible. Way more people think both that bigotry is bad and individual freedom is good than online arguments lead you to believe.
  • I don’t think the sides are as far apart as they think.
  • I think we should disaggregate cancel culture and left-wing identity politics. Cancellation should be understood as an internet phenomenon.
  • If it ever was just something the left does, it isn’t anymore.
  • I think a lot of us could agree that social media mobbing and professional media attention on minor incidents is wrong, especially as part of a campaign to get someone fired. In general, disproportionally severe social and professional sanctions is a problem, no matter the alleged cause.
  • I think most anti-woke liberals really do want to defend free speech and academic freedom. But I don’t think their panic-stoking hyperbole is helping.
criscimagnael

Hackers Bring Down Government Sites in Ukraine - The New York Times - 0 views

  • Hackers brought down dozens of Ukrainian government websites on Friday and posted a message on one saying, “Be afraid and expect the worst,” a day after a breakdown in diplomatic talks between Russia and the West intended to forestall a threatened Russian invasion of the country.
  • Diplomats and analysts have been anticipating a cyberattack on Ukraine, but proving the source of such actions is notoriously difficult.
  • A Ukrainian government agency, the Center for Strategic Communications and Information Security, which was established to counter Russian disinformation, later issued a statement more directly blaming Russia for the hack.
  • ...19 more annotations...
  • On Thursday, Russian officials said the talks had not yielded results, and one senior diplomat said they were approaching “a dead end.”
  • “Ukrainians! All your personal data was uploaded to the internet,” the message read. “All data on the computer is being destroyed. All information about you became public. Be afraid and expect the worst.”
  • The attack came within hours of the conclusion of talks between Russia and the United States and NATO that were intended to find a diplomatic resolution after Russia massed tens of thousands of troops near the border with Ukraine.
  • On Friday, the Biden administration also accused Moscow of sending saboteurs into eastern Ukraine to stage an incident that could provide Russia with a pretext for invasion.
  • Moscow has demanded sweeping security concessions, including a promise not to accept Ukraine into the NATO alliance. But the cyberattack Friday led to immediate pledges of support and closer cooperation with Ukraine from NATO and the European Union, exactly the opposite of what Russian diplomats had said they were seeking.
  • “the United States and its allies are actually saying ‘no’ to key elements of these texts,” referring to two draft treaties on security issues that Russia had proposed to NATO and the United States.
  • A Russian military spyware strain called X-Agent, or Sofacy, that Ukrainian cyber experts say was used to hack Ukraine’s Central Election Commission during a 2014 presidential election, for example, was later found in the server of the Democratic National Committee in the United States after the electoral hacking attacks in 2016.
  • Ukrainian government websites began crashing a few hours later, according to the Ukrainian Foreign Ministry, which said the cyberattack occurred overnight from Thursday to Friday.
  • By morning, the hack had crippled much of the government’s public-facing digital infrastructure, including the most widely used site for handling government services online, Diia. The smartphone app version of the program was still operating, the Ukrainska Pravda newspaper reported. Diia also has a role in Ukraine’s coronavirus response and in encouraging vaccination.
  • The websites of the president and the defense ministry remained online. Ukrainian officials said the attack targeted 70 government websites.
  • the hacking activity targeting state bodies could be a part of this psychological attack on Ukrainians.”
  • “I strongly condemn the cyberattacks on the Ukrainian Government,” Mr. Stoltenberg said in a statement, adding, “NATO & Ukraine will step up cyber cooperation & we will continue our strong political & practical support.”
  • Sophisticated cybertools have turned up in standoffs between Israel and Iran, and the United States blamed Russia for using hacking to influence the 2016 election in the United States to benefit Donald J. Trump.
  • The U.S. government has traced some of the most drastic cyberattacks of the past decade to Russian actions in Ukraine.
  • “We have not seen such a significant attack on government organizations in some time,” it said. “We suggest the current attack is tied to the recent failure of Russian negotiations on Ukraine’s future in NATO,” it added, referring to Moscow’s talks with the West.
  • The malware, known as NotPetya, had targeted a type of Ukrainian tax preparation software but apparently spun out of control, according to experts.
  • It coincided with the assassination of a Ukrainian military intelligence officer in a car bombing in Kyiv and the start of an E.U. policy granting Ukrainians visa-free travel, an example of the type of integration with the West that Russia has opposed.
  • But NotPetya spread around the world, with devastating results, illustrating the risks of collateral damage from military cyberattacks for people and businesses whose lives are increasingly conducted online, even if they live far from conflict zones
  • The total global cost is thought to be far higher
Javier E

How Could AI Destroy Humanity? - The New York Times - 0 views

  • “AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.
  • “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
  • Are there signs A.I. could do this?Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
  • ...11 more annotations...
  • The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
  • A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
  • Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
  • “People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
  • Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it.In time, those limitations could be fixed.
  • Because they learn from more data than even their creators can understand, these system also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment.Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
  • Who are the people behind these warnings?In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
  • The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
  • The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
  • Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Javier E

Ex-ByteDance Executive Accuses TikTok Parent Company of 'Lawlessness' - The New York Times - 0 views

  • A former executive at ByteDance, the Chinese company that owns TikTok, has accused the technology giant of a “culture of lawlessness,” including stealing content from rival platforms Snapchat and Instagram in its early years, and called the company a “useful propaganda tool for the Chinese Communist Party.
  • The claims were part of a wrongful dismissal suit filed on Friday by Yintao Yu, who was the head of engineering for ByteDance’s U.S. operations from August 2017 to November 2018. The complaint, filed in San Francisco Superior Court, says Mr. Yu was fired because he raised concerns about a “worldwide scheme” to steal and profit from other companies’ intellectual property.
  • Among the most striking claims in Mr. Yu’s lawsuit is that ByteDance’s offices in Beijing had a special unit of Chinese Communist Party members sometimes referred to as the Committee, which monitored the company’s apps, “guided how the company advanced core Communist values” and possessed a “death switch” that could turn off the Chinese apps entirely.
  • ...10 more annotations...
  • The video app, which is used by more than 150 million Americans, has become hugely popular for memes and entertainment. But lawmakers and U.S. officials are concerned that the app is passing sensitive information about Americans to Beijing.
  • In his complaint, Mr. Yu, 36, said that as TikTok sought to attract users in its early days, ByteDance engineers copied videos and posts from Snapchat and Instagram without permission and then posted them to the app. He also claimed that ByteDance “systematically created fabricated users” — essentially an army of bots — to boost engagement numbers, a practice that Mr. Yu said he flagged to his superiors.
  • Mr. Yu says he raised these concerns with Zhu Wenjia, who was in charge of the TikTok algorithm, but that Mr. Zhu was “dismissive” and remarked that it was “not a big deal.”
  • he also witnessed engineers for Douyin, the Chinese version of TikTok, tweak the algorithm to elevate content that expressed hatred for Japan.
  • he said that the promotion of anti-Japanese sentiments, which would make it more prominent for users, was done without hesitation.
  • “There was no debate,” he said. “They just did it.”
  • The lawsuit also accused ByteDance engineers working on Chinese apps of demoting content that expressed support for pro-democracy protests in Hong Kong, while making more prominent criticisms of the protests.
  • the lawsuit says the founder of ByteDance, Zhang Yiming, facilitated bribes to Lu Wei, a senior government official charged with internet regulation. Chinese media at the time covered the trial of Lu Wei, who was charged in 2018 and subsequently convicted of bribery, but there was no mention of who had paid the bribes.
  • Mr. Yu, who was born and raised in China and now lives in San Francisco, said in the interview that during his time with the company, American user data on TikTok was stored in the United States. But engineers in China had access to it, he said.
  • The geographic location of servers is “irrelevant,” he said, because engineers could be a continent away but still have access. During his tenure at the company, he said, certain engineers had “backdoor” access to user data.
Javier E

Xi Jinping's Favorite Television Shows - The Bulwark - 0 views

  • After several decades of getting it “right,” why does China now seem to insist on getting it “wrong?”
  • a single-party system meets with widespread, almost universal, scorn in the United States and elsewhere. And so, from the Western point of view, because it lacks legitimacy it must be kept in power via nationalist cheerleading, government media control, and a massive repressive apparatus.
  • Print
  • ...19 more annotations...
  • What if a segment of the population actually supported, or at least tolerated, the CCP? And even if that segment involved both myth and fact, it behooves the CCP to keep the myth alive.
  • How does the CCP garner popular support in an information era? How does a dictatorship explain to its population that its unchallenged rule is wise, just, and socially beneficial?
  • All of this takes place against a backdrop of family and social developments in which we can explore household dynamics, dating habits, and professional aspirations—all within social norms for those honest party members and seemingly violated by those who are not so honest.
  • watch the television series Renmin de Mingyi (“In the Name of the People”), publicly available with English subtitles.
  • In the Name of the People is a primetime drama about a local prosecutor’s efforts to root out corruption in a modern-day, though fictional, Chinese city. Beyond the anti-corruption narrative, the series also goes into local CCP politics as some of the leaders are (you guessed it) corrupt and others are simply bureaucratic time-servers, guarding their own privileges and status without actually helping the people they purport to serve.
  • the series boasts one of Xi’s other main themes, “common prosperity,” a somewhat elastic term that usually means the benefits of prosperity should be shared throughout all segments of society.
  • The historical tools used to generate support such as mass rallies and large-scale hectoring no longer work with a more educated and communications-oriented citizenry.
  • the central themes are quite clear: The party has brought historical prosperity to the community and there are a few bad apples who are unfairly trying to benefit from this wealth. There are also various sluggards and mediocrities who have no capacity for improvement or sense of public responsibilities.
  • So we see government officials pondering if they can ever find a date (being the workaholics that they are), or discussing housework with their spouses, or sharing kitchen duties, or reviewing school work with their child.
  • The show makes clear that the vast majority of party members and government officials are dedicated souls who work to improve peoples’ lives. And in the end, virtue triumphs, the party triumphs, China triumphs, and most (not all) of the personal issues are resolved as well.
  • The show’s version of the CCP eagerly and uncynically supports Chinese culture: The same union leader from the wildcat strike also writes and publishes poetry. Calligraphy is as prized as specialty teas. And all of this is told in a lively style, similar to the Hollywood fare Americans might watch.
  • n the Name of the People was first broadcast in 2017 as a lead-up to the last Communist Party Congress, China’s most important decision-making gathering, held every five years. The show’s launch was a huge hit, achieving the highest broadcast ratings of any show in a decade.
  • Within a month, the first episode had been seen over 350 million times and just one of the streaming platforms, iQIYI, reported a total of 5.9 billion views for the show’s 55 episodes.
  • All of this must come as good news for the prosecutors featured so favorably in the series—for their real-life parent government body, the Supreme People’s Protectorate, commissioned and provided financing for the show.
  • At a minimum, these shows illustrate a stronger self-awareness in the CCP and considerable improvement in communication strategy.
  • Most important, it provides direction to current party members. Indeed, in some cities viewing was made obligatory and the basis for “study sessions” for party cadres
  • Second, the enormous public success of the series and acknowledging deficiencies of the party allows the party to control the criticism without ever addressing the fundamental question of whether a one-party system is intrinsically susceptible to corruption or poor performance.
  • As communication specialists like to say, There is already a conversation taking place about your brand—the only question is whether you will lead the conversation. The CCP is leading in its communications strategy and making it as easy as possible for Chinese citizens to support Xi.
  • it is not difficult to see that in this area, as in many others, China is breaking with tactics from the past and is playing its cards increasingly well. Whether the CCP can renew itself, reestablish that social contract, and live up to its television image is another question.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after wordþffþff as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • hey may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • . “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
lilyrashkind

Lottery Numbers, Blockchain Articles And Cold Calls To Moscow: How Activists Are Using ... - 0 views

  • Early last year, Tobias Natterer, a copywriter at the ad agency DDB Berlin, began pondering how to evade Russian censors. His client, the German arm of nonprofit Reporters Without Borders (RSF), was looking for more effective ways to let Russians get the news their government didn’t want them to see. RSF had been duplicating censored websites and housing them on servers deemed too important for governments to block—a tactic known as collateral freedom. (“If the government tries to shoot down the website,” Natterer explains, “they also have to shoot down their own websites which is why it’s called collateral.”)
  • . Anyone searching those numbers on Twitter or other platforms would then find links to the banned site and forbidden news. Talk about timing. Just as they were about to launch the strategy in Russia and two other countries, Russian President Vladimir Putin gave the order to invade Ukraine. The Kremlin immediately clamped down on nationwide coverage of its actions, making the RSF/DDB experiment even more vital.
  • “We want to make sure that press freedom isn’t just seen as something defended by journalists themselves,” says Lisa Dittmer, RSF Germany’s advocacy officer for Internet freedom. “It’s something that is a core part of any democracy and it’s a core part of defending any kind of freedom that you have.”
  • ...8 more annotations...
  • Telegram videos and more. Ukrainian entrepreneurs are even hijacking their own apps to let Russians know what’s going on. While such efforts have mixed success, they demonstrate the ingenuity needed to win the information battle that’s as old as war itself.
  • Meanwhile, an organization called Squad303 built an online tool that lets people automatically send Russians texts, WhatsApp messages and emails. Some of the most effective strategies rely on old-school technologies. The use of virtual private networks, or VPNs, has skyrocketed in Russia since the war began. That may explain why the country’s telecom regulator has forced Google to delist thousands of URLs linked to VPN sites.
  • For Paulius Senūta, an advertising executive in Lithuania, the weapon of choice is the telephone. He recently launched “CallRussia,” a website that enables Russian speakers to cold-call random Russians based on a directory of 40 million phone numbers. Visitors to the site get a phone number along with a basic script developed by psychologists that advises callers to share their Russian connections and volunteer status before encouraging targets to hear what’s really going on. Suggested lines include “The only thing (Putin) seems to fear is information,” which then lets callers stress the need to put it “in the hands of Russians who know the truth and stand up to stop this war.” In its first eight days, Senūta says users from eastern Europe and elsewhere around the world placed nearly 100,000 calls to strangers in Russia.
  • “One thing is to call them and the other thing is how to talk with them,” says Senūta. As with any telemarketing call, the response from those on the receiving end has been mixed. While some have been receptive, others are angry at the interruption or suspicious that it’s a trick. “How do you speak to someone who has been in a different media environment?”
  • Terms like “war,” “invasion,” or “aggression” have been banned from coverage, punishable by fines of up to five million rubles (now roughly $52,000) or 15 years in prison. Says Kozlovsky: “It’s getting worse and worse.”
  • Arnold Schwarzenegger uploaded a lengthy video message to Russians via Telegram that included both Russian and English subtitles.) However, that it doesn’t mean it hurts to also try new things.
  • The question is whether Russians realize they’re being fed on a media diet of state-sponsored lies and criminalization of the truth. Dittmer believes many Russians are eager to know what’s really going on. So far, RSF’s “Truth Wins” campaign has been viewed more than 150,000 times in Russia. (Previous efforts by DDB and RSF in various countries have included embedding censored news in a virtual library within Minecraft and a playlist on Spotify.)
  • Censorship also cuts both ways. While Russian authorities have banned Facebook and Instagram as “extremist,” Western news outlets have in turn cut ties with state-controlled outlets because of Putin’s disinformation campaign. While pulling products and partnerships out of Russia may send a powerful message to the Kremlin, such isolation also risks leaving a bubble of disinformation intact. Luckily, “it’s pretty much impossible to censor effectively,” says RSF’s Dittmer, pointing to further efforts to use blockchain and gaming technology to spread news. “We can play the cat and mouse game with the internet censors in a slightly more sophisticated way.”
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • ...44 more annotations...
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing,
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • n’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said. 
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
Javier E

There's Probably Nothing We Can Do About This Awful Deepfake Porn Problem - 0 views

  • we can’t (as in, are unable to in real-world terms) censor far-right content online because of the basic reality of modern communications technology. The internet makes the transmission of information, no matter how ugly or shocking or secret, functionally impossible to stop. Digital infrastructure is spread out across the globe, including in regimes that do not play ball with American legal or corporate mandates, and there’s plenty of server racks out there in the world buzzing along that are inaccessible to even the most dedicated hall monitors
  • , it happens that I am one of those free speech absolutists, yes, but that is very explicitly not what the piece argues - it’s precisely an argument that whether we should censor is entirely moot, because we can’t. The technological impediments to cutting off the flow of information (at least that which is not tightly controlled at the supply-side) are now existential.
  • This is a reality people have to accept, even if - especially if - they think that reality is corrosive and ugly. I suspect it’s a similar story with all of this horrible AI “deepfake” celebrity porn.
  • ...1 more annotation...
  • The trouble is that, as I’ve seen again and again, in this era of entitlement people think saying “we can’t do this” necessarily means “I don’t want to.”
Javier E

In Silicon Valley, You Can Be Worth Billions and It's Not Enough - The New York Times - 0 views

  • He got a phone call about the imminent sale of a tech company and allegedly traded on the confidential information, according to charges filed by the Securities and Exchange Commission. The profit for a few minutes of work: $415,726.
  • rarely has anyone traded his reputation for seemingly so little reward. For Mr. Bechtolsheim, $415,726 was equivalent to a quarter rolling behind the couch. He was ranked No. 124 on the Bloomberg Billionaires Index last week, with an estimated fortune of $16 billion.
  • Last month, Mr. Bechtolsheim, 68, settled the insider trading charges without admitting wrongdoing. He agreed to pay a fine of more than $900,000 and will not serve as an officer or director of a public company for five years.
  • ...16 more annotations...
  • Nothing in his background seems to have brought him to this troubling point. Mr. Bechtolsheim was one of those who gave Silicon Valley its reputation as an engineer’s paradise, a place where getting rich was just something that happened by accident.
  • “He cared so much about making great technology that he would buy a house, not furnish it and sleep on a futon,” said Scott McNealy, who joined with Mr. Bechtolsheim four decades ago to create Sun Microsystems, a maker of computer workstations and servers that was a longtime tech powerhouse. “Money was not how he measured himself.”
  • researchers who analyze trading data say corporate executives broadly profit from confidential information. These executives try to avoid traditional insider trading restrictions by buying shares in economically linked firms, a phenomenon called “shadow trading.”
  • “There appears to be significant profits being made from shadow trading,” said Mihir N. Mehta, an assistant professor of accounting at the University of Michigan and an author of a 2021 study in The Accounting Review that found “robust evidence” of the behavior. “The people doing it have a sense of entitlement or maybe just think, ‘I’m invincible.’”
  • He went to Stanford as a Ph.D. student in the mid-1970s and got to know the then-small programming community around the university. In the early 1980s, he, along with Mr. McNealy, Vinod Khosla and Bill Joy, started Sun Microsystems as an outgrowth of a Stanford project. When Sun initially raised money, Mr. Bechtolsheim put his entire life savings — about $100,000 — into the company.
  • “You could end up losing all your money,” he was warned by the venture capitalists financing Sun. His response: “I see zero risk here.”
  • An impromptu demonstration was hastily arranged for 8 a.m., which Mr. Bechtolsheim cut short. He had seen enough, and besides, he had to get to the office. He gave them a check, and the deal was sealed, Mr. Levy wrote, “with as little fanfare as if he were grabbing a latte on the way to work.
  • Mr. Page and Mr. Brin couldn’t deposit Mr. Bechtolsheim’s check for a month because Google did not have a bank account. When Google went public in 2004, that $100,000 investment was worth at least $1 billion.
  • It wasn’t the money that made the story famous, however. It was the way it confirmed one of Silicon Valley’s most cherished beliefs about itself: that its genius is so blindingly obvious, questions are superfluous.
  • The dot-com boom was a disorienting period for longtime Valley leaders whose interest in money was muted. Mr. Bechtolsheim’s Sun colleague Mr. Joy left Silicon Valley.
  • “There’s so much money around, it’s clouding a lot of people’s ethics,” Mr. Joy said in a 1999 oral history
  • Mr. Bechtolsheim didn’t leave. In 2008, he co-founded Arista, a Silicon Valley computer networking company that went public and now has 4,000 employees and a stock market value of $100 billion.
  • Mr. Bechtolsheim was chair of Arista’s board when an executive from another company called in 2019, according to the S.E.C. Arista and the other company, which was not named in court documents, had a history of sharing confidential information under nondisclosure agreements.
  • immediately after hanging up, the government said, he bought Acacia option contracts in the accounts of a close relative and a colleague. The next day, the deal was announced. Acacia shares jumped 35 percent.
  • Arista’s code of conduct states that “employees who possess material, nonpublic information gained through their work at Arista may not trade in Arista securities or the securities of another company to which the information pertains.”
  • Mr. Levy, the “In the Plex” author, said there were plenty of legal ways to make money in Silicon Valley. “Someone who is regarded as an influential funder and is very well connected gets nearly unlimited opportunities to make very desirable early investments,”
« First ‹ Previous 81 - 90 of 90
Showing 20 items per page