History Readings: Group items matching "arms race" in title, tags, annotations or url

Facebook Papers: 'History Will Not Judge Us Kindly' - The Atlantic

  • Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification
  • But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
  • these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
  • ...59 more annotations...
  • Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
  • And again and again, staffers say, Facebook’s leaders ignore them.
  • Facebook has dismissed the concerns of its employees in manifold ways.
  • One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
  • When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”)
  • what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale.
  • The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
  • In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.
  • In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
  • By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed. [A minimal illustrative sketch of this weighting appears after this list of annotations.]
  • Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
  • But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
  • Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
  • This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform
  • Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
  • “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
  • It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
  • I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards
  • Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement.
  • These worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms.
  • One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind
  • The note said: We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.
  • Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place.
  • That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
  • One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence
  • whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
  • Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program.
  • As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams establish goals that are often in direct conflict with each other.
  • “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
  • Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
  • When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others
  • Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
  • The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers
  • Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups
  • And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
  • And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
  • Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
  • Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
  • Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do
  • Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform.
  • It knows that repeat offenders are disproportionately responsible for spreading misinformation.
  • All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.
  • It could consistently enforce its policies regardless of a user’s political power.
  • Facebook could ban reshares.
  • It could choose to optimize its platform for safety and quality rather than for growth.
  • It could tweak its algorithm to prevent widespread distribution of harmful content.
  • Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time.
  • It could make public its rules for how frequently groups can post and how quickly they can grow.
  • It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
  • Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right
  • You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
  • It could do all of these things. But it doesn’t.
  • Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
  • All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power.
  • Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another.
  • When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
  • The lesson for individuals is this:
  • Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content
  • Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
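
A minimal sketch of the reaction-weighting mechanism described in the annotations above: each one-tap reaction contributes a weight to a post's ranking score, reactions other than "like" count for more than a plain "like," and dialing the "angry" weight down to zero removes the edge that anger-inducing posts otherwise get. The weight values, field names, and example posts below are illustrative assumptions, not Facebook's actual code or data.

```python
# Hypothetical reaction weights: non-"like" reactions weighted more heavily,
# with "angry" dialed back to zero (previously > 0, which gave anger-inducing
# posts an algorithmic edge). These numbers are assumptions for illustration.
from dataclasses import dataclass, field

REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,
    "care": 5.0,
    "haha": 5.0,
    "wow": 5.0,
    "sad": 5.0,
    "angry": 0.0,
}

@dataclass
class Post:
    post_id: str
    reactions: dict = field(default_factory=dict)  # e.g. {"angry": 500, "like": 20}

def engagement_score(post: Post) -> float:
    """Sum each reaction count multiplied by its ranking weight."""
    return sum(REACTION_WEIGHTS.get(name, 0.0) * count
               for name, count in post.reactions.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts for a feed by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Post("anger-bait", {"angry": 500, "like": 20}),
        Post("birthday-photos", {"like": 200, "love": 30}),
    ]
    for p in rank_feed(candidates):
        print(p.post_id, engagement_score(p))
    # With "angry" weighted at zero, the anger-dominated post no longer outranks
    # ordinary posts purely on the strength of angry reactions.
```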

North Korea Fires 2 Ballistic Missiles After Lashing Out - The New York Times

  • North Korea fired two ballistic missiles on Friday, its third missile test this month, hours after it warned of “stronger and certain reaction” if the United States helped impose more sanctions on the North in response to its recent series of missile tests.
  • Two short-range ballistic missiles took off from Uiju, a county near the northwestern corner of North Korea, and flew 267 miles before crashing into waters off the country’s east coast, the South Korean military said. It added that its analysts were studying the trajectory and other flight data from the launch to learn more.
  • The escalation also comes at a time when the Biden administration is struggling in its diplomacy to stave off a potential Russian invasion in Ukraine.
  • ...15 more annotations...
  • Earlier on Friday, the North’s Foreign Ministry issued a statement denouncing a proposal by the United States that the U.N. Security Council place fresh sanctions on North Korea following several ballistic and other missile tests since September 2021.
  • Separately on Wednesday, the Biden administration blacklisted five North Korean officials active in Russia and China who Washington said were responsible for procuring goods for North Korea’s weapons of mass destruction and ballistic missile-related programs.
  • North Korea resumed testing missiles in September after a six-month hiatus. It has since conducted at least seven missile tests, including the one on Friday. The tests involved a long-range strategic cruise missile, ballistic missiles rolled out of mountain tunnels and a mini submarine-launched ballistic missile.
  • All the tests violated U.N. Security Council resolutions that banned North Korea from developing or testing ballistic missile technologies or technologies used to make and deliver nuclear weapons. But the North’s Foreign Ministry insisted on Friday that it was exercising “its right to self-defense” and that the missile tests were “part of its efforts for modernizing its national defense capability.”
  • “The U.S. is intentionally escalating the situation even with the activation of independent sanctions, not content with referring the D.P.R.K.’s just activity to the U.N. Security Council,”
  • If the U.S. adopts such a confrontational stance, the D.P.R.K. will be forced to take stronger and certain reaction to it.”
  • But the country has resumed missile tests since meetings between its leader, Kim Jong-un, and Donald J. Trump, then president, ended without an agreement on how to roll back the North’s nuclear weapons program or when to lift sanctions.
  • Those tests indicated that the North was developing more sophisticated ways of delivering nuclear and other warheads to South Korea, Japan and American bases there on its shorter-range missiles, according to defense analysts.
  • Some of the missiles it has tested since 2019 have used solid fuel and have made midair maneuvers, making them harder to intercept, the analysts said.
  • But since the Kim-Trump diplomacy collapsed, North Korea has warned that it no longer felt bound by its self-imposed moratorium on nuclear and long-range missile tests. It has since unveiled its largest-ever, still-untested ICBM during a military parade and exhibition.
  • On Friday, North Korea reiterated that its missile tests “did not target any specific country or force and it did not do any harm to the security of neighboring countries.”
  • But in the test on Tuesday, the North’s hypersonic missile traversed the country from west to east and then veered to the northeast, flying over the waters between the Russian Far East and Japan toward the Pacific,
  • The missile hit a target 621 miles away, the North said. And as the missile hurtled out of North Korea at up to 10 times the speed of sound, aviation regulators briefly halted flights out of some airports on the U.S. West Coast as a precaution.
  • Washington has repeatedly urged North Korea to return to talks, but the country has said it would not until it was convinced that the United States would remove its “hostile” policy, including sanctions.
  • “Willful sanctions do not help resolve the Korean Peninsula issue, but only worsen the confrontational mood,

6 Scandals That Rocked the Winter Olympics - HISTORY

  • The Winter Olympics have been marked by controversy and scandal since the first Games in 1924. From cheating by East German lugers to the sordid Tonya Harding figure skating fiasco, here are six events that made headlines:
  • At the Games in Chamonix, France, Norwegians contended the 500-meter speedskating final had been mistimed in favor of American Charles Jewtraw, a heavy underdog who won the gold.
  • Jewtraw's win, by 1/5 of a second, stunned him, too. In a 1983 interview with Sports Illustrated, Jewtraw said he had never competed in the 500 prior to the gold-medal race and hadn't even trained for the Games.
  • ...8 more annotations...
  • Schranz’s supporters contended the mystery man had been a French policeman or soldier who had purposely interfered with the run to ensure Killy’s victory. The French hinted Schranz had made up the story. "I was descending and I saw a dark shadow ahead of me," Schranz said at a news conference. "I wanted to avoid it, and I stopped. It was apparently a ski policeman."
  • The women's luge competition at the Grenoble Games was all but a lock for East Germany. Defending champ and gold-medal favorite Ortrun Enderlein stood in first; teammates Anna-Maria Müller and Angela Knösel were second and fourth. 
  • “A jury member acted immediately,” International Luge Federation president Bert Isatitsch said, according to UPI. "He went to the starting line and put his hands on the runners. They were warm." Isatitsch said East German officials used "foul language" when notified of the disqualification. “One waved his arms around, shouting and screaming,” he told UPI.
  • A month before the 1994 Winter Games, a man wielding a metal baton attacked gold-medal favorite Kerrigan during a practice at the U.S. Nationals, paving the way for Harding to win the event and to qualify for the Olympics. Soon afterward, however, it was discovered that Harding’s ex-husband, Jeff Gillooly, had planned the attack. With Kerrigan recovered—and Harding allowed to compete despite her not-yet-confirmed connection to the crime—the women’s figure skating competition became the hottest event at the Olympics. TV ratings soared.
  • Ice dancing got a dose of spy games in Nagano, Japan, when a Canadian judge secretly taped a conversation with another judge about picking winners before the competition. After her complaints to officials had been brushed aside, Jean Senft recorded Ukrainian judge Yuri Balkov discussing skater placements as proof of her accusations. During the call, Balkov said he would vote for Canadians if Senft voted for a Ukrainian pair. "The athletes are not competing on a fair playing field," Senft later told CBC News. "This isn't sport. Somebody had to get proof."
  • were made. (Ice dancing was not removed from Olympic competition.) "If [cheating] happens at the world championships in some small town, nobody notices," Pound said, according to The New York Times. "But in the Olympics, hundreds of millions of people are watching."
  • The Russian team of Elena Berezhnaya and Anton Sikharulidze edged Canadians Jamie Sale and David Pelletier for the gold medal. But Marie-Reine Le Gougne, a French judge, came forward, saying she was pressured by the French ice sports federation to put the Russians first. “I knew very well who would vote in favor of the Russians and who would vote in favor of the Canadians," she told Reuters. "I was almost certain that I was the one who would award the Olympic title. What I feared would happen really did.”
  • Le Gougne was suspended from judging for three years and banned from the 2006 Winter Games. The scandal led to sweeping judging reforms in the sport. 

The sinister spy who made our world a safer place

  • Like Oppenheimer, Fuchs is an ambiguous and polarising character. A congressional hearing concluded he had “influenced the safety of more people and accomplished greater damage than any other spy in the history of nations”
  • But by helping the USSR to build the bomb, Fuchs also helped to forge the nuclear balance of power, the precarious equilibrium of mutually assured destruction under which we all still live.
  • Oppenheimer changed the world with science; and Fuchs changed it with espionage. It is impossible to understand the significance of one without the other.
  • ...9 more annotations...
  • In March 1940 two more exiled German scientists working at Birmingham University, Otto Frisch and Rudolf Peierls, outlined the first practical exposition of how to build a nuclear weapon, a device “capable of unleashing an explosion at a temperature comparable to that of the interior of the sun”. Peierls recruited Fuchs to join him in the top-secret project to develop a bomb, codenamed “Tube Alloys”.
  • Fuchs arrived as a refugee in Britain in 1933 and, like many scientists escaping Nazism, he was warmly welcomed by the academic community. At Edinburgh University he studied under the great physicist Max Born, another German exile.
  • Fuchs was extremely clever and very odd: chain-smoking, obsessively punctual, myopic, gangling and solitary, the “perfect specimen of an abstracted professor”, in the words of one colleague. He kept his political beliefs entirely concealed.
  • The son of a Lutheran pastor, Fuchs came of age in the economic chaos and violent political conflict of Weimar Germany. Like many young Germans, he embraced communism, the creed from which he never wavered. He was studying physics at Kiel University when his father was arrested for speaking out against Hitler. His mother killed herself by drinking hydrochloric acid. Returning from an anti-Nazi rally, he was beaten up and thrown into a river by fascist brownshirts. The German Communist Party told him to flee.
  • When Churchill and Roosevelt agreed to collaborate on building the bomb (while excluding the Soviet Union), “Tube Alloys” was absorbed into the far more ambitious Manhattan Project. Fuchs was one of 17 British-based scientists to join Oppenheimer at Los Alamos.
  • “I never saw myself as a spy,” Fuchs later insisted. “I just couldn’t understand why the West was not prepared to share the atom bomb with Moscow. I was of the opinion that something with that immense destructive potential should be made available to the big powers equally.”
  • In June 1945 Gold was waiting on a bench in Santa Fe when Fuchs drove up in his dilapidated car and handed over what his latest biographer calls “a virtual blueprint for the Trinity device”, the codename for the first test of a nuclear bomb a month later. When the Soviet Union carried out its own test in Kazakhstan in 1949, the CIA was astonished, believing Moscow’s atomic weapons programme was years behind the West. America’s nuclear superiority evaporated; the atomic arms race was on.
  • Fuchs was a naive narcissist and a traitor to the country that gave him shelter. He was entirely obedient to his KGB masters, who justified his actions with hindsight. But without him, there might have been only one superpower. Some in the Truman administration argued that the bomb should be used on the Soviet Union before it developed its own. Fuchs and the other atomic spies enabled Moscow to keep nuclear pace with the West, maintaining a fragile peace.
  • As the father of the atomic bomb, Oppenheimer made the world markedly less secure. Fuchs, paradoxically, made it safer.

Opinion | Why Barbie and Ken Need Each Other - The New York Times

  • Between the middle of the 1970s and the late 2010s, in their responses to the General Social Survey, American women reported themselves to be steadily unhappier. The trend was not drastic, but it was consistent: Women were less happy in the 1980s than they were in the 1970s, less happy in the Obama era than the Clinton era, and still less happy under Trump.
  • For men, the trend was more complex. They started out slightly unhappier than women and then made gains in the Reagan and Clinton years, while female happiness declined. But then male unhappiness plunged between the 9/11 era and Barack Obama’s re-election in 2012, before stabilizing a bit thereafter. By the pre-Covid period, the sexes were close to parity — sharing more reported unhappiness than either had been experiencing 30 or 40 years before.
  • These figures are drawn out of a fascinating new paper, “The Socio-Political Demography of Happiness,” from the University of Chicago economist Sam Peltzman
  • ...19 more annotations...
  • a different trend covered in the Peltzman paper: the persistent happiness advantage enjoyed by married couples over the unmarried, which has slightly widened since the early 1970s and now sits at around 35 points on a scale running from -100 to 100.
  • Over that same period, Americans have become much less likely to be married overall. In 1970, just 9 percent of people ages 25 to 50 had never tied the knot; in 2018, it was 35 percent.
  • the simplest possible explanation for declining happiness: For women maybe first, and for men too, eventually, less wedlock means more woe.
  • Barbieland itself is a female-first utopia that looks fundamentally dystopian — plastic, denatured, death-denying, cut off from love and procreation. The way that Barbiedom marginalizes images of pregnancy and motherhood, to say nothing of literal baby dolls, is a running preoccupation of the film
  • Is the Greta Gerwig movie proudly feminist, crypto-conservative or somewhere in between?
  • The simplest reading is the feminist one. The movie depicts a dolltopia where Barbies occupy every important job and office (with their Kens as arm candy) and tell themselves that their example has solved all of women’s problems in the real world, too — only to discover, when Margot Robbie’s “stereotypical Barbie” goes on a quest into our own contemporary reality, that sexism still exists, the patriarchy is disguised but maybe still resilient, the board of Mattel is proudly “feminist” but all male, and early 21st-century women are being asked to do it all for meager recompense.
  • Michael Knowles of The Daily Wire claims it has “conservative, anti-feminist, pro-family, pro-motherhood” themes
  • In part, the conservative spin comes from the sheer fun of Gosling’s performance
  • I want to talk about these findings in the light of the running debate about the true ideological perspective of the billion-dollar box-office juggernaut “Barbie.”
  • Ken’s plight is treated sympathetically — he’s mostly running his coup to impress Barbie, and what are men for in the post-sexual-revolution landscape, anyway?
  • Barbie’s own arc is away from the female-dominated dystopia and back toward embodied womanhood, the real world with all its patriarchal holdovers
  • “Barbie” is a movie with a feminist default, but also complicated and sometimes muddled feelings about what the sexual revolution has done and where feminism ought to go.
  • It’s against the resilient patriarchy, but wary of the girlboss alternative
  • It wants womanhood and motherhood, but it doesn’t want the Kens back in charge, and it doesn’t really know what purpose men should serve.
  • In each narrative, the one way that the current dissatisfactions of women and men can’t be resolved is with the happy ending that even stories about the battle of the sexes used to take for granted — not a rearrangement of political power but a romantic partnership, not one sex’s rule but both sexes’ contentment.
  • so the movie ends — again, spoiler — with Barbie out of Barbieland but on her own, seeking out some sort of reproductive destiny at the gynecologist with a mother-daughter cheerleading squad beside her and no Ken in sight.
  • There’s an interesting parallel to the ending of Lena Dunham’s series “Girls,”
  • A guy can literally organize a revolution and it still isn’t enough to make Barbie see him as a lover, a romantic partner, an erotic object, a husband or a father.
  • In the movie they made, “Barbie and Ken” is a statement of reverse subordination, female rule and male eclipse. But in reality, nothing may matter as much to male and female happiness, and indeed, to the future of the human race, as whether Barbie and Ken can make that “and” into something reciprocal and fertile — a bridge, a bond, a marriage.

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • ...17 more annotations...
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.

OpenAI CEO Calls for Collaboration With China to Counter AI Risks - WSJ

  • As the U.S. seeks to contain China’s progress in artificial intelligence through sanctions, OpenAI CEO Sam Altman is choosing engagement.
  • Altman emphasized the importance of collaboration between American and Chinese researchers to mitigate the risks of AI systems, against a backdrop of escalating competition between Washington and Beijing to lead in the technology. 
  • “China has some of the best AI talent in the world,” Altman said. “So I really hope Chinese AI researchers will make great contributions here.”
  • ...12 more annotations...
  • Altman and Geoff Hinton, a so-called godfather of AI who quit Google to warn of the potential dangers of AI, were among more than a dozen American and British AI executives and senior researchers from companies including chip maker Nvidia and generative AI leaders Midjourney and Anthropic who spoke at the conference. 
  • “This event is extremely rare in U.S.-China AI conversations,” said Jenny Xiao, a partner at venture-capital firm Leonis Capital and who researches AI and China. “It’s important to bring together leading voices in the U.S. and China to avoid issues such as AI arms racing, competition between labs and to help establish international standards,” she added.
  • By some metrics, China now produces more high-quality research papers in the field than the U.S. but still lags behind in “paradigm-shifting breakthroughs,” according to an analysis from The Brookings Institution. In generative AI, the latest wave of top-tier AI systems, China remains one to two years behind U.S. development and reliant on U.S. innovations, China tech watchers and industry leaders have said. 
  • The competition between Washington and Beijing belies deep cross-border connections among researchers: The U.S. and China remain each other’s number one collaborators in AI research,
  • During a congressional testimony in May, Altman warned that a peril of AI regulation is that “you slow down American industry in such a way that China or somebody else makes faster progress.”
  • At the same time, he added that it was important to continue engaging in global conversations. “This technology will impact Americans and all of us wherever it’s developed,”
  • Altman delivered the opening keynote for a session dedicated to AI safety and alignment, a hotly contested area of research that aims to mitigate the harmful impacts of AI on society. Hinton delivered the closing talk for the same session later Saturday, also dialing in. He presented his research that had made him more concerned about the risks of AI and appealed to young Chinese researchers in the audience to help work on solving these problems.
  • “Over time you should expect us to open-source more models in the future,” Altman said but added that it would be important to strike a balance to avoid abuses of the technology.
  • He has emphasized cautious regulation as European regulators consider the AI Act, viewed as one of the most ambitious plans globally to create guardrails that would address the technology’s impact on human rights, health and safety, and on tech giants’ monopolistic behavior.
  • Chinese regulators have also pressed forward on enacting strict rules for AI development that share significant overlap with the EU act but impose additional censorship measures that ban generating false or politically sensitive speech.
  • Tegmark, who attended in person, strode onto the stage smiling and waved at the crowd before opening with a few lines of Mandarin.
  • “For the first time now we have a situation where both East and West have the same incentive to continue building AI to get to all the benefits but not go so fast that we lose control,” Tegmark said, after warning the audience about catastrophic risks that could arise from careless AI development. “This is something we can all work together on.”

AI could end independent UK news, Mail owner warns

  • Artificial intelligence could destroy independent news organisations in Britain and potentially is an “existential threat to democracy”, the executive chairman of DMGT has warned.
  • “They have basically taken all our data, without permission and without even a consideration of the consequences. They are using it to train their models and to start producing content. They’re commercialising it,
  • AI had the potential to destroy independent news organisations “by ripping off all our content and then repurposing it to people … without any responsibility for the efficacy of that content”
  • ...4 more annotations...
  • there are huge consequences to this technology. And it’s not just the danger of ripping our industry apart, but also ripping other industries apart, all the creative industries. How many jobs are going to be lost? What’s the damage to the economy going to be if these rapacious organisations can continue to operate without any legal ramifications?
  • The danger is that these huge platforms end up in an arms race with each other. They’re like elephants fighting and then everybody else is like mice that get stamped on without them even realising the consequences of their actions.”
  • The risk was that the internet had become an echo chamber of stories produced by special interest groups and rogue states, he said.
  • Rothermere revealed that DMGT had experimented with using AI to help journalists to publish stories faster, but that it then took longer “to check the accuracy of what it comes up with” than it would have done to write the article.

Opinion | The OpenAI drama explains the human penchant for risk-taking - The Washington Post

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • ...13 more annotations...
  • OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about taking things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.

The Only Way to Deal With the Threat From AI? Shut It Down | Time

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • ...44 more annotations...
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well as, or better than, a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing.
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application programming interfaces, or APIs (a minimal sketch of such a call appears after this list).
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said. 
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
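
For readers unfamiliar with what “selling access through APIs” looks like in practice (see the GPT-3 annotation above), here is a minimal sketch of a request to OpenAI’s completions endpoint. The model name, prompt, and parameters are illustrative assumptions, not details drawn from the article.

```python
import os
import requests

# Minimal sketch of a request to OpenAI's text-completions endpoint.
# Model name and prompt are illustrative placeholders, not details from the article.
response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "davinci",  # placeholder; available model names have changed over time
        "prompt": "Write one sentence about large language models.",
        "max_tokens": 50,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```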

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times - 0 views

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • ...44 more annotations...
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”