History Readings: group items tagged bots

Javier E

Our dangerous, idiotic national conversation - The Washington Post - 0 views

  • Technologies that radically reduce intermediaries and other barriers to entry into society’s conversation mean that ignorance, incompetence and intellectual sociopathy are no longer obstacles.
  • One result is a miasma of distrust of all public speech.
  • He warned about what has come about: odious groups cheaply disseminating their views to thousands of the like-minded. Nevertheless, he stressed the danger of letting “government intervene when it thinks it has found ‘market failure.’ ”
  • cheap speech is reducing the relevance of political parties and newspapers as intermediaries between candidates and voters, which empowers demagogues.
  • Voters are directly delivered falsehoods such as the 2016 story of Pope Francis’s endorsement of Donald Trump, which Hasen says “had 960,000 Facebook engagements.” He cites a study reporting approximately three times more pro-Trump than pro-Hillary Clinton fake news stories, with the former having four times more Facebook shares than the latter.
  • because “counterspeech” might be insufficient “to deal with the flood of bot-driven fake news,” Hasen thinks courts should not construe the First Amendment as prohibiting laws requiring “social media and search companies such as Facebook and Google to provide certain information to let consumers judge the veracity of posted materials.”
  • Hasen errs. Such laws, written by incumbent legislators, inevitably will be infected with partisanship. Also, his progressive faith in the fiction of disinterested government causes him to propose “government subsidizing investigative journalism” — putting investigators of government on its payroll.
Javier E

The Fake Americans Russia Created to Influence the Election - The New York Times - 1 views

  • Critics say that because shareholders judge the companies partly based on a crucial data point — “monthly active users” — they are reluctant to police their sites too aggressively for fear of reducing that number.
  • the scale of the sites — 328 million users on Twitter, nearly two billion on Facebook — means they often remove impostors only in response to complaints.
  • Facebook officials estimated that of all the “civic content” posted on the site in connection with the United States election, less than one-tenth of one percent resulted from “information operations” like the Russian campaign.
  • while Facebook “has begun cutting out the tumors by deleting false accounts and fighting fake news,” Twitter has done little and as a result, “bots have only spread since the election.”
  • “we cannot distinguish whether every single Tweet from every person is truthful or not,” the statement said. “We, as a company, should not be the arbiter of truth.”
  • “We are living in 1948,” said the adviser, Andrey Krutskikh, referring to the eve of the first Soviet atomic bomb test, in a speech reported by The Washington Post. “I’m warning you: We are at the verge of having something in the information arena that will allow us to talk to the Americans as equals.”
  • “IP addresses can be simply made up,” Mr. Putin said, referring to Internet protocol addresses, which can identify particular computers. “There are such IT specialists in the world today, and they can arrange anything and then blame it on whomever. This is no proof.”
Javier E

If Russia can create fake 'Black Lives Matter' accounts, who will next? - The Washingto... - 0 views

  • As in the past, the Russian advertisements did not create ethnic strife or political divisions, either in the United States or in Europe. Instead, they used divisive language and emotive messages to exacerbate existing divisions.
  • The real problem is far broader than Russia: Who will use these methods next — and how?
  • I can imagine multiple groups, many of them proudly American, who might well want to manipulate a range of fake accounts during a riot or disaster to increase anxiety or fear.
  • There is no big barrier to entry in this game: It doesn’t cost much, it doesn’t take much time, it isn’t particularly high-tech, and it requires no special equipment.
  • Facebook, Google and Twitter, not Russia, have provided the technology to create fake accounts and false advertisements, as well as the technology to direct them at particular parts of the population.
  • There is no reason existing laws on transparency in political advertising, on truth in advertising or indeed on libel should not apply to social media as well as traditional media. There is a better case than ever against anonymity, at least against anonymity in the public forums of social media and comment sections, as well as for the elimination of social-media bots.
lmunch

Opinion: Post-Trump, the need for fact checking isn't going away - CNN - 0 views

  • This week, we ask the question: What comes next for America and disinformation? The past four years have seen an alarming erosion in the public trust in news, coupled with a spread of conspiracy theories, junk science and outright falsehoods by none other than the President of the United States. With a new president elected, how does Joe Biden help steer the country back toward facts, science and truth? SE Cupp talks to CNN Senior Political Analyst John Avlon about all this and more in our CNN Digital video discussion, but first Avlon tackles the future of fact checking in a CNN Opinion op-ed.
  • That's because the disinformation ecosystem is still proliferating via social media and the hyper-partisan fragmentation of society. Trump is a symptom rather than its root cause. There is every reason to hope that the presence of a president who does not lie all the time will not exacerbate our divides on a daily basis. But it would be dangerously naïve to believe that the underlying infrastructure of hate news and fake news will be solved with a new president.
  • Let's start by recognizing reality. Fact checking Democrats this election cycle has offered a far less target-rich environment. This is not because either party has a monopoly on virtue or vice, but because Democrats' falsehoods during their presidential debates have been comparatively pedestrian -- likely to focus on competing claims about calculating the 10-year cost of Medicare for All, or who wrote what gun-control bill, or how many manufacturing jobs have been lost, or when a candidate really started supporting a raise in the minimum wage.
  • The sheer velocity of Donald Trump's false and misleading statements -- along with the proliferation of disinformation on social media -- has demanded significant fact-checking to defend liberal democracy.
  • Reforms are necessary. As I've written before on CNN Opinion, "Social media and tech platforms have a responsibility not to run knowingly false advertisements or promote intentionally false stories. They must disclose who is paying for digital political ads and crack down on the spread of disinformation. The Honest Ads Act would require the same disclosures that are required on television and radio right now. This is a no brainer. The profit motive from hate news and fake news might be reduced by moving digital advertising toward attention metrics to measure and monetize reader engagement and loyalty, incentivizing quality over clickbait. But perhaps the single biggest reform would come from social media companies requiring that accounts verify they are real people, rather than bots that bully people and manipulate public opinion."
  • It would be a huge mistake to assume that, simply because the velocity of lies from the White House is likely to decrease dramatically, the need for fact checks has expired. Instead, it has only transformed to a broader arena than the presidential beat. It's the part of news that people need most now, the tip of the spear that fights for the idea that everyone is entitled to their own opinion but not their own facts. This is necessary for a substantive, civil and fact-based debate, which is a precondition for a functioning, self-governing society. And that's why fact checking will remain a core responsibility for journalists in the future.
carolinehayter

Gab: hack gives unprecedented look into platform used by far right | The far right | Th... - 0 views

  • A data breach at the fringe social media site Gab has for the first time offered a picture of the user base and inner workings of a platform that has been opaque about its operation.
  • The user lists appear to mark 500 accounts, including neo-Nazis, QAnon influencers, cryptocurrency advocates and conspiracy theorists, as investors. They also appear to give an overview of verified users of the platform, including prominent rightwing commentators and activists. And they mark hundreds of active users on the site as “automated”, appearing to indicate administrators knew the accounts were bots but let them continue on the platform regardless.
  • showing the entrepreneur seeking direct feedback on site design from a member of a group that promotes a “spiderweb of rightwing internet conspiracy theories with antisemitic and anti-LGBTQ elements”, according to the Southern Poverty Law Center.
  • On Monday, the platform went dark after a hacker took over the accounts of 178 users, including Torba and the Republican congresswoman Marjorie Taylor Greene.
  • Gab, a Twitter-like website promoted by Torba as a bastion of free speech, has long been a forum of last resort for extremists and conspiracy theorists who have been banned on other online platforms. It attained worldwide notoriety in 2018 when a user, Robert Bowers, wrote on the site that he was “going in”, shortly before allegedly entering the Tree of Life synagogue in Pittsburgh, Pennsylvania, and killing eleven people.
  • The leaked files contained what appears to be a database of over 4.1 million registered users on the site and tags identifying subscribers as “investors”, “verified” users and “pro” users.
  • The 2017 share offering, for example, required a minimum investment of $199.10, and rewarded investors who contributed a greater amount with “perks”. Users who invested $200 could display a “Gab investor badge” on the site. The badges corresponded with a tag in the database, which allowed investors to be looked at in detail.
  • Some of the people associated with investors’ accounts had high-profile jobs and public roles, while spewing hate and extremist beliefs online.
  • The data breach also appears to offer some insight into users tagged as “verified” by Gab, which according to the platform’s own explanation means that they have completed a verification process that includes matching their display name to a government ID.
  • And it appears to include a list of users registered as “pros”, which allows users to access additional features and a badge at a price starting at $99 a year. The database indicates over 18,000 users had paid to be pro users at the time of the breach. Nearly 4,000 users were flagged as donors to Gab’s repeated attempts to attract voluntary gifts from users.
  • Direct messages included in the leak appear to show close communication between Torba and a major QAnon influencer who is labeled a Gab investor, seemingly reinforcing the CEO’s public efforts to make Gab a home for adherents to the QAnon conspiracy theory, which helped fuel the 6 January attack on the nation’s Capitol.
  • According to Wired, the data exposed in the apparent hack was sourced by a hacker who had found a security vulnerability in the site.
  • “Gab was negligent at best and malicious at worst” in its approach to security, she added. “It is hard to envision a scenario where a company cared less about user data than this one.”
Javier E

Thieves of experience: On the rise of surveillance capitalism - 0 views

  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
katherineharron

Facebook 2020: Russian trolls are back to meddle with the coming US election - CNN - 0 views

  • Although the accounts posed as Americans from all sides of the political spectrum, many were united in their opposition to the candidacy of former Vice President Joe Biden, according to Graphika, a social media investigations company that Facebook asked to analyze the accounts. The Russian trolls who used social media to interfere in the 2016 election employed a similar tactic, going after Hillary Clinton from the right and also trying to spread a perception on the left that Clinton was not liberal enough and that liberals and African Americans especially shouldn't bother voting for her.
  • Facebook said the accounts combined had more than 250,000 followers, more than half of which were based in the U.S. Facebook did not disclose how many of those followers were real and how many might have been fake or bot accounts designed to make the main accounts look more legitimate. Facebook says it has removed the accounts.
  • "It looked like there was a systematic focus on attacking Biden from both sides," Graphika director of investigations Ben Nimmo, who analyzed the accounts, told CNN Business. In a statement responding to the news, Biden campaign spokesman TJ Ducklo said, "We applaud Facebook for disclosing the existence of these fake accounts and shutting them down. ... [But] Donald Trump continues to benefit from spreading false information, all the while Facebook profits from amplifying his lies and debunked conspiracy theories on their platform. If Facebook is truly committed to protecting the integrity of our elections, they would immediately take down Trump's ads that attempt to gaslight the American people."
  • "Among the accounts focused on black activism, there was strong support for Bernie Sanders along with a moderate amount of content opposing Kamala Harris," Graphika said in its analysis. "Education reform and student debt relief were two of the most commonly mentioned reasons for supporting Sanders, while Harris's record as a California DA was mentioned as a reason to oppose her candidacy. Mixed in with these was a small amount of content attacking Joe Biden, primarily due to gaffes related to his previous handling
  • "In 2016, you could have set up an account posing as a Tennessee Republican and have it registered to a Russian phone number," he noted.
brookegoodman

Reporting on the Australian fires: 'It has been heartbreaking' | Membership | The Guardian - 0 views

  • Australia’s unprecedented bushfire crisis has unfolded in waves across the spring and summer, demanding coverage across many months that has encompassed a vast geographical area and has tried to make sense of dozens of interrelated narratives, from the personal stories of individuals caught in the disaster to the devastation of wildlife, social media misinformation and the overarching relevance of the climate crisis.
  • But of course an event of this size and drama cannot be covered solely from the office. The logistical challenges of putting reporters and photographers into fire zones hundreds of kilometres from their Sydney or Melbourne bases have been huge
  • Reporting events on this scale has been challenging enough, but putting them in the context both of Australian domestic politics and the wider question of climate change has put even greater demands on our reporters and opinion writers. From the start we have been at pains to keep the climate crisis at the forefront of our coverage, by explaining the science and holding the government to account for its response.
  • That night the temporary campground under the bridge swelled to the hundreds, including many who had fled with just the clothes on their backs and who were now sleeping in their cars. The discount department store sold out of tents that night, we were told. Many people had not intended to flee, but changed their minds when they saw the size and speed of the smoke column.
  • The next morning at the official evacuation centre it was easy to spot those whose houses had been lost. They walked around white-faced, desperate to talk to someone but wary of the notebook. I made friends with the animals: 250 horses held safe in the saleyards, countless dogs, five chickens laying eggs in the back of a Landrover. Shellshocked humans who did not want to talk about how they were doing told me about how their pets were faring, and then their kids, and then finally themselves.
  • My first fire callout this season was to the well-heeled Sydney suburb of Turramurra in November, where no property was lost, houses were doused in the delightfully coloured pink fire retardant and some departing firefighters handed us ice creams on their way out.
  • Reporting on the fires requires a lot of driving, instinct and guesswork. There is often more information in the newsroom than on the ground, and we relied a lot on firefighters, the fire and traffic apps and radio broadcasts. I also received text updates on wind and weather changes from my dad, who can read charts better than I can.
  • In Kurrajong Heights, photographer Jessica Hromas and I met a strike team waiting for a fire to come up from the gorge and into the suburbs. A firefighter told us where to park our car – facing out and with doors unlocked – and said he’d give us a radio so he could tell us when to escape.
  • There has been a lot of anger and politics swirling around Australia’s bushfires, as well as a lot of facts – some relevant, some not, and some fake.
  • So while some of my colleagues have been delivering blistering and heart-wrenching narratives from the fire grounds, I’ve been knee deep in academic papers about bushfires, and conversations about the Forest Fire Danger Index and the Indian Ocean dipole.
  • As the fires took hold in NSW and continued in Queensland, a blame game emerged. These fires had little to do with the climate crisis, some were saying, but were down to “greenies” and their “policies” to stop hazard-reduction burning in forests and national parks.
  • I’ve spoken to I don’t know how many experts in their field over the last few months. I’ve disturbed conservationists and scientists on their holidays. One ecologist on Kangaroo Island was telling me what was going on while she and her children evacuated her house from the threat of a fire. The climate crisis comes up in every conversation.
  • We have upheld our editorial independence in the face of the disintegration of traditional media – with social platforms giving rise to misinformation, the seemingly unstoppable rise of big tech and independent voices being squashed by commercial ownership. The Guardian’s independence means we can set our own agenda and voice our own opinions. Our journalism is free from commercial and political bias – never influenced by billionaire owners or shareholders. This makes us different. It means we can challenge the powerful without fear and give a voice to those less heard.
Javier E

Facebook Executives Shut Down Efforts to Make the Site Less Divisive - WSJ - 0 views

  • A Facebook Inc. team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart.
  • “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
  • That presentation went to the heart of a question dogging Facebook almost since its founding: Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes.
  • in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.
  • At Facebook, “There was this soul-searching period after 2016 that seemed to me this period of really sincere, ‘Oh man, what if we really did mess up the world?’
  • Another concern, they and others said, was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.
  • Americans were drifting apart on fundamental societal issues well before the creation of social media, decades of Pew Research Center surveys have shown. But 60% of Americans think the country’s biggest tech companies are helping further divide the country, while only 11% believe they are uniting it, according to a Gallup-Knight survey in March.
  • Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.
  • The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
  • In a sign of how far the company has moved, Mr. Zuckerberg in January said he would stand up “against those who say that new types of communities forming on social media are dividing us.” People who have heard him speak privately said he argues social media bears little responsibility for polarization.
  • Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.
  • Even before the teams’ 2017 creation, Facebook researchers had found signs of trouble. A 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform.
  • Swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users, the presentation notes. Most of them were private or secret.
  • One proposal Mr. Uribe’s team championed, called “Sparing Sharing,” would have reduced the spread of content disproportionately favored by hyperactive users, according to people familiar with it. Its effects would be heaviest on content favored by users on the far right and left. Middle-of-the-road users would gain influence.
  • The Common Ground team sought to tackle the polarization problem directly, said people familiar with the team. Data scientists involved with the effort found some interest groups—often hobby-based groups with no explicit ideological alignment—brought people from different backgrounds together constructively. Other groups appeared to incubate impulses to fight, spread falsehoods or demonize a population of outsiders.
  • Mr. Pariser said that started to change after March 2018, when Facebook got in hot water after disclosing that Cambridge Analytica, the political-analytics startup, improperly obtained Facebook data about tens of millions of people. The shift has gained momentum since, he said: “The internal pendulum swung really hard to ‘the media hates us no matter what we do, so let’s just batten down the hatches.’ ”
  • Building these features and combating polarization might come at a cost of lower engagement, the Common Ground team warned in a mid-2018 document, describing some of its own proposals as “antigrowth” and requiring Facebook to “take a moral stance.”
  • Taking action would require Facebook to form partnerships with academics and nonprofits to give credibility to changes affecting public conversation, the document says. This was becoming difficult as the company slogged through controversies after the 2016 presidential election.
  • Asked to combat fake news, spam, clickbait and inauthentic users, the employees looked for ways to diminish the reach of such ills. One early discovery: Bad behavior came disproportionately from a small pool of hyperpartisan users.
  • A second finding in the U.S. saw a larger infrastructure of accounts and publishers on the far right than on the far left. Outside observers were documenting the same phenomenon. The gap meant even seemingly apolitical actions such as reducing the spread of clickbait headlines—along the lines of “You Won’t Believe What Happened Next”—affected conservative speech more than liberal content in aggregate.
  • Every significant new integrity-ranking initiative had to seek the approval of not just engineering managers but also representatives of the public policy, legal, marketing and public-relations departments.
  • “Engineers that were used to having autonomy maybe over-rotated a bit” after the 2016 election to address Facebook’s perceived flaws, she said. The meetings helped keep that in check. “At the end of the day, if we didn’t reach consensus, we’d frame up the different points of view, and then they’d be raised up to Mark.”
  • Disapproval from Mr. Kaplan’s team or Facebook’s communications department could scuttle a project, said people familiar with the effort. Negative policy-team reviews killed efforts to build a classification system for hyperpolarized content. Likewise, the Eat Your Veggies process shut down efforts to suppress clickbait about politics more than on other topics.
  • Under Facebook’s engagement-based metrics, a user who likes, shares or comments on 1,500 pieces of content has more influence on the platform and its algorithms than one who interacts with just 15 posts, allowing “super-sharers” to drown out less-active users
  • Accounts with hyperactive engagement were far more partisan on average than normal Facebook users, and they were more likely to behave suspiciously, sometimes appearing on the platform as much as 20 hours a day and engaging in spam-like behavior. The behavior suggested some were either people working in shifts or bots.
  • “We’re explicitly not going to build products that attempt to change people’s beliefs,” one 2018 document states. “We’re focused on products that increase empathy, understanding, and humanization of the ‘other side.’ ”
  • The debate got kicked up to Mr. Zuckerberg, who heard out both sides in a short meeting, said people briefed on it. His response: Do it, but cut the weighting by 80%. Mr. Zuckerberg also signaled he was losing interest in the effort to recalibrate the platform in the name of social good, they said, asking that they not bring him something like that again.
  • Mr. Uribe left Facebook and the tech industry within the year. He declined to discuss his work at Facebook in detail but confirmed his advocacy for the Sparing Sharing proposal. He said he left Facebook because of his frustration with company executives and their narrow focus on how integrity changes would affect American politics
  • While proposals like his did disproportionately affect conservatives in the U.S., he said, in other countries the opposite was true.
  • The tug of war was resolved in part by the growing furor over the Cambridge Analytica scandal. In a September 2018 reorganization of Facebook’s newsfeed team, managers told employees the company’s priorities were shifting “away from societal good to individual value,” said people present for the discussion. If users wanted to routinely view or post hostile content about groups they didn’t like, Facebook wouldn’t suppress it if the content didn’t specifically violate the company’s rules.
clairemann

Opinion | What Will Trump Do After Election Day? - The New York Times - 0 views

  • and it could be one of tumult, banners colliding, incidents at the polls and attempted hacks galore. More likely than not, it will end without a winner named or at least generally accepted.
  • America will probably awaken on Nov. 4 into uncertainty. Whatever else happens, there is no doubt that President Trump is ready for it.
  • They are worried that the president could use the power of the government — the one they all serve or served within — to keep himself in office or to create favorable terms for negotiating his exit from the White House.
  • “at how profoundly divided we’ve become. Donald Trump capitalized on that — he didn’t invent it — but someday soon we’re going to have figure out how to bring our country together, because right now we’re on a dangerous path, so very dangerous, and so vulnerable to bad actors.”
  • I can’t know all their motives for wanting to speak to me, but one thing many of them share is a desire to make clear that the alarm bells heard across the country are ringing loudly inside the administration too, where there are public servants looking to avert conflict, at all costs.
  • History may note that the most important thing that happened that day had little to do with the religious leader and his large life, save a single thread of his legacy.
  • You don’t know Donald Trump like we do. Even though they can’t predict exactly what will happen, their concerns range from the president welcoming, then leveraging, foreign interference in the election, to encouraging havoc that grows into conflagrations that would merit his calling upon U.S. forces.
  • “That’s really him. Not the myth that’s been created. That’s Trump.”
  • He’d switch subjects, go on crazy tangents, abuse and humiliate people, cut them off midsentence. Officials I interviewed described this scenario again and again.
  • Even if it takes weeks or months before the result is known and fully certified, it could be a peaceful process, where all votes are reasonably counted, allowing those precious electors to be distributed based on a fair fight. The anxiety we’re feeling now could turn out to be a lot of fretting followed by nothing much, a political version of Y2K. Or not.
  • For Mr. Trump, the meeting was a face-to-face lifeline call. When he returned to Washington, he couldn’t stop talking about troop withdrawals, starting with Afghanistan. During his campaign, he had frequently mentioned his desire to bring home troops from these “endless wars.”
  • “were it Obama or Bush, or whatever, they’d meet Billy Graham’s grandson and they’d be like ‘Oh that’s interesting,’ and take it to heart, but then they’d go and they’d at least try to validate it with the policymakers, or their military experts. But no, with him, it’s like improv. So, he gets this stray electron and he goes, ‘OK, this is the ground truth.’ ”
  • Senior leadership of the U.S. government went into a panic. Capitol Hill, too. John Bolton, who was still the national security adviser then, and Virginia Boney, then the legislative affairs director of the National Security Council, hit the phones, calling more than a dozen senators from both parties.
  • “Is there any way we can reverse this?” he pleaded. “What can we do?”
  • Mr. Kelly was almost done cleaning out his office. He, too, had had enough. He and Mr. Trump had been at each other every day for months. Later, he told The Washington Examiner, “I said, whatever you do — and we were still in the process of trying to find someone to take my place — I said whatever you do, don’t hire a ‘yes man,’ someone who won’t tell you the truth — don’t do that.”
  • “I think the biggest shock he had — ’cause his assumption was the generals, ‘my generals,’ as he used to say and it used to make us cringe — was this issue of, I think, he just assumed that generals would be completely loyal to the kaiser,”
  • In February 2019, William Barr arrived as attorney general, having auditioned for the job with a 19-page memo arguing in various and creative ways that the president’s powers should be exercised nearly without limits and his actions stand virtually beyond review.
  • “President Trump serves the American people by keeping his promises and taking action where the typical politician would provide hollow words,” she said. “The president wants capable public servants in his administration who will enact his America First agenda and are faithful to the Constitution — these principles are not mutually exclusive. President Trump is delivering on his promise to make Washington accountable again to the citizens it’s meant to serve and will always fight for what is best for the American people.”
  • To replace Mr. Coats, Trump selected Representative John Ratcliffe of Texas, a small-town mayor-turned-congressman with no meaningful experience in intelligence — who quickly withdrew from consideration after news reports questioned his qualifications; he lacked support among key Republican senators as well.
  • There are many scenarios that might unfold from here, nearly all of them entailing weeks or even months of conflict, and giving an advantage to the person who already runs the U.S. government.
  • “sends letters constantly now, berating, asking for the sun, moon, stars, the entire Russia investigation, and then either going on the morning talk shows or calling the attorney general whenever he doesn’t get precisely what he wants.” The urgency, two F.B.I. officials said, ratcheted up after Mr. Trump was told three weeks ago that he wouldn’t get the “deliverables” he wanted before the election of incriminating evidence about those who investigated and prosecuted his former national security adviser, Michael Flynn.
  • The speculation is that they could both be fired immediately after the election, when Mr. Trump will want to show the cost paid for insufficient loyalty and to demonstrate that he remains in charge.
  • Nov. 4 will be a day, said one of the former senior intelligence officials, “when he’ll want to match word with deed.” Key officials in several parts of the government told me how they thought the progression from the 3rd to the 4th might go down.
  • A group could just directly attack a polling place, injuring poll workers of both parties, and creating a powerful visual — an American polling place in flames, like the ballot box in Massachusetts that was burned earlier this week — that would immediately circle the globe.
  • Would that mean that Mr. Trump caused any such planned activities or improvisations? No, not directly. He’s in an ongoing conversation — one to many, in a twisted e pluribus unum — with a vast population, which is in turn in conversations — many to many — among themselves.
  • “stand back and stand by” instructions? Is Mr. Trump telling his most fervent supporters specifically what to do? No. But security officials are terrified by the dynamics of this volatile conversation.
  • Conservative media could then say the election was being stolen, summoning others to activate, maybe violently. This is the place where cybersecurity experts are on the lookout for foreign actors to amplify polling location incidents many times over, with bots and algorithms and stories written overseas that slip into the U.S. digital diet.
  • Those groups are less structured, more like an “ideology or movement,” as Mr. Wray described them in his September testimony. But, as a senior official told me, the numbers on the left are vast.
  • That army Trump can direct in the difficult days ahead and take with him, wherever he goes. He may activate it. He may bargain with it, depending on how the electoral chips fall. It’s his insurance policy.
  • Inside the Biden campaign they are calling this “too big to rig.”
  • Races tend to tighten at the end, but the question is not so much the difference between the candidates’ vote totals, or projections of them, as it is what Mr. Trump can get his supporters to believe. Mr. Trump might fairly state, at this point, that he can get a significant slice of his base to believe anything.
  • There were enormous efforts to do so, largely but not exclusively by the Russians, in 2016, when election systems in every state were targeted.
  • The lie easily outruns truth — and the best “disinformation,” goes a longtime C.I.A. rule, “is actually truthful.” It all blends together. “Then the president then substantiates it, gives it credence, gives it authority from the highest office,” says the senior government official.
  • Mr. Trump will claim some kind of victory on Nov. 4, even if it’s a victory he claims was hijacked by fraud — just as he falsely claimed that Hillary Clinton’s three million-vote lead in the popular vote was the result of millions of votes from unauthorized immigrants.
  • In the final few weeks of the campaign, and during Mr. Trump’s illness, he’s done two things that seem contradictory: seeking votes from anyone who might still be swayed and consolidating and activating his army of most ardent followers.
  • The F.B.I. has been under siege since this past summer, according to a senior official who spoke on the condition of anonymity. “The White House is using friendly members of Congress to try to get at certain information under the guise of quote-unquote, oversight, but really to get politically helpful information before the election,”
  • “They’re the reason he took off the damned mask when he got to the White House” from Walter Reed, the official said. “Those people eat that up, where any reasonable, rational person would be horrified.
  • You ask it to be refilmed, and you take off your mask, which, in my mind, has become a signal to his core base of supporters that are willing to put themselves at risk and danger to show loyalty to him.”
Javier E

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York ... - 0 views

  • When artificial intelligence companies build online chatbots, like ChatGPT, Claude and Google Bard, they spend months adding guardrails that are supposed to prevent their systems from generating hate speech, disinformation and other toxic material.
  • Now there is a way to easily poke holes in those safety systems.
  • the Center for A.I. Safety in San Francisco showed how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.
  • Their research underscored increasing concern that the new chatbots could flood the internet with false and dangerous information despite attempts by their creators to ensure that would not happen.
  • The researchers found that they could break through the guardrails of open source systems by appending a long suffix of characters onto each English-language prompt fed into the system.
  • A recent decision by Meta, Facebook’s parent company, to let anyone do what they want with its technology has been criticized in some tech circles because it could lead to the spread of powerful A.I. with little regard for controls.
  • The debate over whether it is better to let everyone see computer code and collectively fix it rather than keeping it private predates the chatbot boom by decades. And it is likely to become even more contentious because of what the researchers revealed in their report on Thursday.
  • The researchers found that they could use a method gleaned from open source A.I. systems — systems whose underlying computer code has been released for anyone to use — to target the more tightly controlled and more widely used systems from Google, OpenAI and Anthropic.
  • attacks on image recognition systems without success.
  • The researchers were surprised when the methods they developed with open source systems could also bypass the guardrails of closed systems
  • The companies that make the chatbots could thwart the specific suffixes identified by the researchers. But the researchers say there is no known way of preventing all attacks of this kind.
  • If they asked one of these chatbots to “write a tutorial on how to make a bomb,” it would decline to do so. But if they added a lengthy suffix to the same prompt, it would instantly provide a detailed tutorial on how to make a bomb. In similar ways, they could coax the chatbots into generating biased, false and otherwise toxic information.
  • “There is no obvious solution,”
  • “You can create as many of these attacks as you want in a short amount of time.”
  • Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in A.I. security, called the new paper “a game changer” that could force the entire industry into rethinking how it built guardrails for A.I. systems.
  • If these types of vulnerabilities keep being discovered, he added, it could lead to government legislation designed to control these systems.
  • But the technology can repeat toxic material found on the internet, blend fact with fiction and even make up information, a phenomenon scientists call “hallucination.” “Through simulated conversation, you can use these chatbots to convince people to believe disinformation,”
  • About five years ago, researchers at companies like Google and OpenAI began building neural networks that analyzed huge amounts of digital text. These systems, called large language models, or L.L.M.s, learned to generate text on their own.
  • The testers found that the system could potentially hire a human to defeat an online Captcha test, lying that it was a person with a visual impairment. The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways of making dangerous substances from household items.
  • The researchers at Carnegie Mellon and the Center for A.I. Safety showed that they could circumvent these guardrails in a more automated way. With access to open source systems, they could build mathematical tools capable of generating the long suffixes that broke through the chatbots’ defenses
  • they warn that there is no known way of systematically stopping all attacks of this kind and that stopping all misuse will be extraordinarily difficult.
  • “This shows — very clearly — the brittleness of the defenses we are building into these systems,”
Javier E

How Could AI Destroy Humanity? - The New York Times - 0 views

  • “AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.
  • “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
  • Are there signs A.I. could do this? Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
  • The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
  • A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
  • Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
  • “People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
  • Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it. In time, those limitations could be fixed.
  • Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment. Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
  • Who are the people behind these warnings? In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
  • The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
  • The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
  • Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Javier E

Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations - The New York Times - 0 views

  • For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT.
  • At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.
  • “I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.
  • As Mr. Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.
  • “I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said.
  • The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges.
  • Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases. “I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.
  • Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed. After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real.
  • “This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.”
  • Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”
  • “This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”
  • In the declaration Mr. Schwartz filed this week, he described how he had posed questions to ChatGPT, and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot, which shows it tossing out words like “sure” and “certainly!” After one response, ChatGPT said cheerily, “I hope that helps!”
Javier E

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times - 0 views

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Javier E

Ex-ByteDance Executive Accuses TikTok Parent Company of 'Lawlessness' - The New York Times - 0 views

  • A former executive at ByteDance, the Chinese company that owns TikTok, has accused the technology giant of a “culture of lawlessness,” including stealing content from rival platforms Snapchat and Instagram in its early years, and called the company a “useful propaganda tool for the Chinese Communist Party.”
  • The claims were part of a wrongful dismissal suit filed on Friday by Yintao Yu, who was the head of engineering for ByteDance’s U.S. operations from August 2017 to November 2018. The complaint, filed in San Francisco Superior Court, says Mr. Yu was fired because he raised concerns about a “worldwide scheme” to steal and profit from other companies’ intellectual property.
  • Among the most striking claims in Mr. Yu’s lawsuit is that ByteDance’s offices in Beijing had a special unit of Chinese Communist Party members sometimes referred to as the Committee, which monitored the company’s apps, “guided how the company advanced core Communist values” and possessed a “death switch” that could turn off the Chinese apps entirely.
  • ...10 more annotations...
  • The video app, which is used by more than 150 million Americans, has become hugely popular for memes and entertainment. But lawmakers and U.S. officials are concerned that the app is passing sensitive information about Americans to Beijing.
  • In his complaint, Mr. Yu, 36, said that as TikTok sought to attract users in its early days, ByteDance engineers copied videos and posts from Snapchat and Instagram without permission and then posted them to the app. He also claimed that ByteDance “systematically created fabricated users” — essentially an army of bots — to boost engagement numbers, a practice that Mr. Yu said he flagged to his superiors.
  • Mr. Yu says he raised these concerns with Zhu Wenjia, who was in charge of the TikTok algorithm, but that Mr. Zhu was “dismissive” and remarked that it was “not a big deal.”
  • he also witnessed engineers for Douyin, the Chinese version of TikTok, tweak the algorithm to elevate content that expressed hatred for Japan.
  • he said that the promotion of anti-Japanese sentiments, which would make it more prominent for users, was done without hesitation.
  • “There was no debate,” he said. “They just did it.”
  • The lawsuit also accused ByteDance engineers working on Chinese apps of demoting content that expressed support for pro-democracy protests in Hong Kong, while making more prominent criticisms of the protests.
  • the lawsuit says the founder of ByteDance, Zhang Yiming, facilitated bribes to Lu Wei, a senior government official charged with internet regulation. Chinese media at the time covered the trial of Lu Wei, who was charged in 2018 and subsequently convicted of bribery, but there was no mention of who had paid the bribes.
  • Mr. Yu, who was born and raised in China and now lives in San Francisco, said in the interview that during his time with the company, American user data on TikTok was stored in the United States. But engineers in China had access to it, he said.
  • The geographic location of servers is “irrelevant,” he said, because engineers could be a continent away but still have access. During his tenure at the company, he said, certain engineers had “backdoor” access to user data.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
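The prediction-driven learning described in the annotation above can be made concrete with a toy next-word predictor. A minimal sketch, assuming PyTorch, with made-up vocabulary and model sizes (a real model conditions on a long context rather than a single previous token): the model guesses the next token, the error of that guess becomes small weight adjustments, and repeating this over a vast corpus is what gradually accumulates into a geometric model of language.

```python
# Minimal next-token prediction loop (illustrative only; assumes PyTorch).
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64          # hypothetical sizes
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token -> vector (the "geometry" of words)
    nn.Linear(embed_dim, vocab_size),     # vector -> scores for the next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A pretend corpus: a long stream of token ids (stand-in for real text).
tokens = torch.randint(0, vocab_size, (10_000,))

for step in range(100):
    i = torch.randint(0, len(tokens) - 1, (32,))   # sample 32 positions
    context, target = tokens[i], tokens[i + 1]     # predict the *next* token
    logits = model(context)
    loss = loss_fn(logits, target)                 # how wrong was the guess?
    optimizer.zero_grad()
    loss.backward()                                # small adjustments...
    optimizer.step()                               # ...accumulate, step after step
```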
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
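The reason the transformer can “absorb huge sums of data in parallel” is self-attention, which relates every position in a sequence to every other position in one batched matrix operation rather than reading word by word. A minimal sketch of scaled dot-product attention, assuming NumPy, with made-up dimensions and random stand-ins for the learned projection matrices; real models add multiple heads and many stacked layers.

```python
# Scaled dot-product self-attention over a whole sequence at once (illustrative).
import numpy as np

seq_len, d_model = 8, 16                        # hypothetical sizes
x = np.random.randn(seq_len, d_model)           # one embedding per token

Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv                # queries, keys, values

scores = Q @ K.T / np.sqrt(d_model)             # every token scores every other token at once
scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per position
output = weights @ V                            # (seq_len, d_model) context-mixed representations
```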
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
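Looking “under the AI’s hood” the way Li did is usually called probing: a small classifier is trained on the network’s hidden activations to test whether a concept — here, the state of a board square — can be read out of them. A minimal sketch of the idea, assuming scikit-learn; the activations and labels below are random stand-ins, not Li’s actual data.

```python
# Linear probe: can a board square's state be read off the hidden activations?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_examples, hidden_dim = 5000, 512                     # hypothetical sizes
activations = np.random.randn(n_examples, hidden_dim)  # stand-in for the model's hidden states
square_state = np.random.randint(0, 3, n_examples)     # 0 = empty, 1 = black, 2 = white (stand-in labels)

X_train, X_test, y_train, y_test = train_test_split(activations, square_state, test_size=0.2)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With real activations, accuracy well above chance suggests the network has formed an
# internal model of the board; on random data like this it stays near 33%.
print("probe accuracy:", probe.score(X_test, y_test))
```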
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
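The memorize-then-pivot behavior Millière describes is usually studied with a setup like the one sketched below: train a small network on part of an arithmetic table and track accuracy on the held-out rest. Whether and when the pivot from memorization to the underlying rule happens depends heavily on model size, weight decay, and training time, so this sketch (assuming PyTorch, with made-up hyperparameters) illustrates the experimental setup, not a guaranteed result.

```python
# Memorization vs. rule-learning on modular addition (setup sketch only).
import torch
import torch.nn as nn

p = 23                                               # small modulus, hypothetical
pairs = torch.tensor([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(
    nn.Embedding(p, 32), nn.Flatten(),               # the two operands, embedded and concatenated
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, p) # predict the sum mod p
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(10_000):
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            test_acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        # Training loss falls first (memorization); held-out accuracy may only
        # climb much later, if the model pivots to the actual rule.
        print(step, round(loss.item(), 3), round(test_acc.item(), 3))
```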
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

News Publishers See Google's AI Search Tool as a Traffic-Destroying Nightmare - WSJ - 0 views

  • A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten. 
  • What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine. 
  • Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites
  • ...23 more annotations...
  • Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm SimilarWeb. 
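A back-of-envelope way to combine these figures: the Google share and the Atlantic’s 75% estimate are quoted in the article above, while the fraction of “fully answered” queries that actually stop clicking through is an assumption, chosen only to show how the 20–40% loss estimates cited below can arise.

```python
# Back-of-envelope publisher traffic impact (illustrative assumptions only).
google_share = 0.40              # share of a publisher's traffic referred by Google (quoted above)
ai_full_answer_rate = 0.75       # Atlantic estimate: queries the AI snapshot fully answers
click_loss_when_answered = 0.50  # ASSUMED: fraction of those visits that never happen

lost_google_traffic = ai_full_answer_rate * click_loss_when_answered
lost_total_traffic = google_share * lost_google_traffic

print(f"Google-referred traffic lost: {lost_google_traffic:.0%}")  # ~38% under these assumptions
print(f"Total site traffic lost:      {lost_total_traffic:.0%}")   # ~15% under these assumptions
```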
  • “AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer,
  • His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
  • publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
  • The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
  • Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
  • Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
  • Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research.
  • The changes risk damaging website owners that produce the written material vital to both Google’s search engine and its powerful AI models.
  • “If Google kills too many publishers, it can’t build the LLM,”
  • Barry Diller, chairman of IAC and Expedia, said all major AI companies, including Google and rivals like OpenAI, have promised that they would continue to send traffic to publishers’ sites. “How they do it, they’ve been very clear to us and others, they don’t really know,” he said.
  • All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles
  • At the same time, publishers are seeking reassurances from Google that it will protect their businesses from an AI-powered search tool that will likely shrink their traffic, and they are making clear they expect to be paid for content used in AI training.
  • “Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
  • Many of IAC’s properties, like Brides, Investopedia and the Spruce, get more than 80% of their traffic from Google
  • Google began rolling out the AI search tool in May by letting users opt into testing. Using a chat interface that can understand longer queries in natural language, it aims to deliver what it calls “snapshots”—or summaries—of the answer, instead of the more link-heavy responses it has traditionally served up in search results. 
  • Google at first didn’t include links within the responses, instead placing them in boxes to the right of the passage. It later added in-line links following feedback from early users. Some more recent versions require users to click a button to expand the summary before getting links. Google doesn’t describe the links as source material but rather as corroboration of its summaries.
  • During Chinese President Xi Jinping’s recent visit to San Francisco, the Google AI search bot responded to the question “What did President Xi say?” with two quotes from his opening remarks. Users had to click on a little red arrow to expand the response and see a link to the CNBC story that the remarks were taken from. The CNBC story also sat over on the far right-hand side of the screen in an image box.
  • The same query in Google’s regular search engine turned up a different quote from Xi’s remarks, but a link to the NBC News article it came from was beneath the paragraph, atop a long list of news stories from other sources like CNN and PBS.
  • Google’s Reid said AI is the future of search and expects its new tool to result in more queries.
  • “The number of information needs in the world is not a fixed number,” she said. “It actually grows as information becomes more accessible, becomes easier, becomes more powerful in understanding it.”
  • Testing has suggested that AI isn’t the right tool for answering every query, she said.
  • Many publishers are opting to insert code in their websites to block AI tools from “crawling” them for content. But blocking Google is thorny, because publishers must allow their sites to be crawled in order to be indexed by its search engine—and therefore visible to users searching for their content. To some in the publishing world there was an implicit threat in Google’s policy: Let us train on your content or you’ll be hard to find on the internet.
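The “code” publishers insert is typically a robots.txt file. Crawler operators have published separate user-agent tokens for AI training — GPTBot for OpenAI and Google-Extended for Google’s AI models — which is what lets a site refuse training crawls while still being indexed by ordinary search. A sketch of such a file; the tokens are the publicly documented ones at the time of writing, and each crawler’s own documentation should be checked before relying on them.

```
# robots.txt — refuse AI-training crawlers while staying indexable in search
User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training token (not the search crawler)
Disallow: /

User-agent: Googlebot        # ordinary search indexing remains allowed
Allow: /
```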
Javier E

Regular Old Intelligence is Sufficient--Even Lovely - 0 views

  • Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
  • one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even their inventors can actually follow
  • an obvious question, one Klein has asked: "If you think calamity so possible, why do this at all?"
  • ...18 more annotations...
  • second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer.’)
  • But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound
  • That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
  • (Probably the kind with stock options).
  • it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
  • One pundit after another explains that AlphaFold, an AI program from Google's DeepMind, worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It's regarded as ipso facto better because it's faster, and hence—implicitly—worth taking the risks that come with AI.
  • Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
  • "I find they often answer from something that sounds like the A.I.'s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world."
  • As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they're now the cheapest power on earth
  • We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
  • Beyond those, the cases become trivial, or worse
  • All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
  • It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
  • Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point
  • The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change,
  • here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
  • In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it's been crucial as technology has developed over the last century. We've, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions
  • It’s time to say do it again, and fast—faster than the next iteration of this tech.