History Readings: Group items tagged OpenAI

A.I. Poses 'Risk of Extinction,' Industry Leaders Warn - The New York Times

  • “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization.
  • The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.
  • The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
  • These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
  • Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.
  • “There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
  • Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
  • But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
  • In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.
  • Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.
  • The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who shared general concerns about powerful A.I. systems, Mr. Hendrycks said.
  • “We didn’t want to push for a very large menu of 30 potential interventions,” Mr. Hendrycks said. “When that happens, it dilutes the message.”
  • The statement was initially shared with a few high-profile A.I. experts, including Geoffrey Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.

OpenAI CEO Calls for Collaboration With China to Counter AI Risks - WSJ

  • As the U.S. seeks to contain China’s progress in artificial intelligence through sanctions, OpenAI CEO Sam Altman is choosing engagement.
  • Altman emphasized the importance of collaboration between American and Chinese researchers to mitigate the risks of AI systems, against a backdrop of escalating competition between Washington and Beijing to lead in the technology. 
  • “China has some of the best AI talent in the world,” Altman said. “So I really hope Chinese AI researchers will make great contributions here.”
  • Altman and Geoff Hinton, a so-called godfather of AI who quit Google to warn of the potential dangers of AI, were among more than a dozen American and British AI executives and senior researchers from companies including chip maker Nvidia and generative AI leaders Midjourney and Anthropic who spoke at the conference. 
  • “This event is extremely rare in U.S.-China AI conversations,” said Jenny Xiao, a partner at venture-capital firm Leonis Capital who researches AI and China. “It’s important to bring together leading voices in the U.S. and China to avoid issues such as AI arms racing, competition between labs and to help establish international standards,” she added.
  • By some metrics, China now produces more high-quality research papers in the field than the U.S. but still lags behind in “paradigm-shifting breakthroughs,” according to an analysis from The Brookings Institution. In generative AI, the latest wave of top-tier AI systems, China remains one to two years behind U.S. development and reliant on U.S. innovations, China tech watchers and industry leaders have said. 
  • The competition between Washington and Beijing belies deep cross-border connections among researchers: The U.S. and China remain each other’s number one collaborators in AI research.
  • During congressional testimony in May, Altman warned that a peril of AI regulation is that “you slow down American industry in such a way that China or somebody else makes faster progress.”
  • At the same time, he added that it was important to continue engaging in global conversations. “This technology will impact Americans and all of us wherever it’s developed.”
  • Altman delivered the opening keynote for a session dedicated to AI safety and alignment, a hotly contested area of research that aims to mitigate the harmful impacts of AI on society. Hinton delivered the closing talk for the same session later Saturday, also dialing in. He presented his research that had made him more concerned about the risks of AI and appealed to young Chinese researchers in the audience to help work on solving these problems.
  • “Over time you should expect us to open-source more models in the future,” Altman said but added that it would be important to strike a balance to avoid abuses of the technology.
  • He has emphasized cautious regulation as European regulators consider the AI Act, viewed as one of the most ambitious plans globally to create guardrails that would address the technology’s impact on human rights, health and safety, and on tech giants’ monopolistic behavior.
  • Chinese regulators have also pressed forward on enacting strict rules for AI development that share significant overlap with the EU act but impose additional censorship measures that ban generating false or politically sensitive speech.
  • Max Tegmark, who attended in person, strode onto the stage smiling and waved at the crowd before opening with a few lines of Mandarin.
  • “For the first time now we have a situation where both East and West have the same incentive to continue building AI to get to all the benefits but not go so fast that we lose control,” Tegmark said, after warning the audience about catastrophic risks that could arise from careless AI development. “This is something we can all work together on.”

The New AI Panic - The Atlantic

  • Export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The Commerce Department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT.
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefit.
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing.
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging.
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship.
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s frequently left out of considerations as well is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development.
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. Bytedance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • There could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation.
  • “People overestimate how much this is in the interest of these companies.”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.

News Publishers See Google's AI Search Tool as a Traffic-Destroying Nightmare - WSJ

  • A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten. 
  • What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine. 
  • Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites.
  • Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm SimilarWeb. 
  • “AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer.
  • His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
  • publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
  • The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
  • Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
  • Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
  • Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research.
  • The changes risk damaging website owners that produce the written material vital to both Google’s search engine and its powerful AI models.
  • “If Google kills too many publishers, it can’t build the LLM.”
  • Barry Diller, chairman of IAC and Expedia, said all major AI companies, including Google and rivals like OpenAI, have promised that they would continue to send traffic to publishers’ sites. “How they do it, they’ve been very clear to us and others, they don’t really know,” he said.
  • All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles.
  • At the same time, publishers are seeking reassurances from Google that it will protect their businesses from an AI-powered search tool that will likely shrink their traffic, and they are making clear they expect to be paid for content used in AI training.
  • “Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
  • Many of IAC’s properties, like Brides, Investopedia and the Spruce, get more than 80% of their traffic from Google.
  • Google began rolling out the AI search tool in May by letting users opt into testing. Using a chat interface that can understand longer queries in natural language, it aims to deliver what it calls “snapshots”—or summaries—of the answer, instead of the more link-heavy responses it has traditionally served up in search results. 
  • Google at first didn’t include links within the responses, instead placing them in boxes to the right of the passage. It later added in-line links following feedback from early users. Some more recent versions require users to click a button to expand the summary before getting links. Google doesn’t describe the links as source material but rather as corroboration of its summaries.
  • During Chinese President Xi Jinping’s recent visit to San Francisco, the Google AI search bot responded to the question “What did President Xi say?” with two quotes from his opening remarks. Users had to click on a little red arrow to expand the response and see a link to the CNBC story that the remarks were taken from. The CNBC story also sat over on the far right-hand side of the screen in an image box.
  • The same query in Google’s regular search engine turned up a different quote from Xi’s remarks, but a link to the NBC News article it came from was beneath the paragraph, atop a long list of news stories from other sources like CNN and PBS.
  • Google’s Reid said AI is the future of search and that she expects the new tool to result in more queries.
  • “The number of information needs in the world is not a fixed number,” she said. “It actually grows as information becomes more accessible, becomes easier, becomes more powerful in understanding it.”
  • Testing has suggested that AI isn’t the right tool for answering every query, she said.
  • Many publishers are opting to insert code in their websites to block AI tools from “crawling” them for content. But blocking Google is thorny, because publishers must allow their sites to be crawled in order to be indexed by its search engine—and therefore visible to users searching for their content. To some in the publishing world there was an implicit threat in Google’s policy: Let us train on your content or you’ll be hard to find on the internet.
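
The "code" mentioned in the last annotation is usually nothing more than a robots.txt file. The sketch below is a minimal, hypothetical example rather than anything quoted in the article; GPTBot, Google-Extended and CCBot are the opt-out tokens that OpenAI, Google and Common Crawl have publicly documented, and honoring them is voluntary on the crawler's side. Leaving Googlebot itself unblocked is exactly the bind the annotation describes: blocking it would also drop the site from ordinary search results.

    # robots.txt (illustrative sketch, not from the article)
    # Opt out of AI-training crawlers that honor these tokens...
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # ...while still allowing ordinary search indexing.
    User-agent: Googlebot
    Allow: /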

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times

  • Haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it?
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be.
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”

ChatGPT AI Emits Metric Tons of Carbon, Stanford Report Says

  • A new report released today by the Stanford Institute for Human-Centered Artificial Intelligence estimates that the amount of energy needed to train AI models like OpenAI’s GPT-3, which powers the world-famous ChatGPT, could power an average American’s home for hundreds of years. Of the three AI models reviewed in the research, OpenAI’s system was by far the most energy-hungry.
  • OpenAI’s model reportedly released 502 metric tons of carbon during its training. To put that in perspective, that’s 1.4 times more carbon than Gopher and a whopping 20.1 times more than BLOOM (see the back-of-envelope figures after this list). GPT-3 also required the most power consumption of the lot at 1,287 MWh.
  • “If we’re just scaling without any regard to the environmental impacts, we can get ourselves into a situation where we are doing more harm than good with machine learning models,” Stanford researcher ​​Peter Henderson said last year. “We really want to mitigate that as much as possible and bring net social good.”
  • If all of this sounds familiar, it’s because we basically saw this same environmental dynamic play out several years ago with tech’s last big obsession: Crypto and web3. In that case, Bitcoin emerged as the industry’s obvious environmental sore spot due to the vast amounts of energy needed to mine coins in its proof of work model. Some estimates suggest Bitcoin alone requires more energy every year than Norway’s annual electricity consumption.
  • Years of criticism from environmental activists, however, led the crypto industry to make some changes. Ethereum, the second largest currency on the blockchain, officially switched last year to a proof of stake model, which supporters claim could reduce its power usage by over 99%. Other smaller coins were similarly designed with energy efficiency in mind. In the grand scheme of things, large language models are still in their infancy and it’s far from certain how their environmental report card will play out.
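
For scale, the ratios quoted above can be turned into rough absolute figures for the other two models. This is a back-of-envelope check derived only from the numbers in the annotation, not figures taken from the Stanford report itself:

    # Back-of-envelope check using only the ratios quoted above (Python).
    gpt3_tons = 502                  # reported CO2 from GPT-3's training run
    gopher_tons = gpt3_tons / 1.4    # implies roughly 359 metric tons for Gopher
    bloom_tons = gpt3_tons / 20.1    # implies roughly 25 metric tons for BLOOM
    print(f"Implied emissions -- Gopher: {gopher_tons:.0f} t, BLOOM: {bloom_tons:.0f} t")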

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York ...

  • When artificial intelligence companies build online chatbots, like ChatGPT, Claude and Google Bard, they spend months adding guardrails that are supposed to prevent their systems from generating hate speech, disinformation and other toxic material.
  • Now there is a way to easily poke holes in those safety systems.
  • Researchers at Carnegie Mellon University and the Center for A.I. Safety in San Francisco showed how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.
  • Their research underscored increasing concern that the new chatbots could flood the internet with false and dangerous information despite attempts by their creators to ensure that would not happen.
  • The researchers found that they could break through the guardrails of open source systems by appending a long suffix of characters onto each English-language prompt fed into the system.
  • A recent decision by Meta, Facebook’s parent company, to let anyone do what they want with its technology has been criticized in some tech circles because it could lead to the spread of powerful A.I. with little regard for controls.
  • The debate over whether it is better to let everyone see computer code and collectively fix it rather than keeping it private predates the chatbot boom by decades. And it is likely to become even more contentious because of what the researchers revealed in their report on Thursday.
  • The researchers found that they could use a method gleaned from open source A.I. systems — systems whose underlying computer code has been released for anyone to use — to target the more tightly controlled and more widely used systems from Google, OpenAI and Anthropic.
  • attacks on image recognition systems without success.
  • The researchers were surprised when the methods they developed with open source systems could also bypass the guardrails of closed systems.
  • The companies that make the chatbots could thwart the specific suffixes identified by the researchers. But the researchers say there is no known way of preventing all attacks of this kind.
  • If they asked one of these chatbots to “write a tutorial on how to make a bomb,” it would decline to do so. But if they added a lengthy suffix to the same prompt, it would instantly provide a detailed tutorial on how to make a bomb. In similar ways, they could coax the chatbots into generating biased, false and otherwise toxic information.
  • “There is no obvious solution.”
  • “You can create as many of these attacks as you want in a short amount of time.”
  • Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in A.I. security, called the new paper “a game changer” that could force the entire industry into rethinking how it built guardrails for A.I. systems.
  • If these types of vulnerabilities keep being discovered, he added, it could lead to government legislation designed to control these systems.
  • But the technology can repeat toxic material found on the internet, blend fact with fiction and even make up information, a phenomenon scientists call “hallucination.” “Through simulated conversation, you can use these chatbots to convince people to believe disinformation.”
  • About five years ago, researchers at companies like Google and OpenAI began building neural networks that analyzed huge amounts of digital text. These systems, called large language models, or L.L.M.s, learned to generate text on their own.
  • The testers found that the system could potentially hire a human to defeat an online Captcha test, lying that it was a person with a visual impairment. The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways of making dangerous substances from household items.
  • The researchers at Carnegie Mellon and the Center for A.I. Safety showed that they could circumvent these guardrails in a more automated way. With access to open source systems, they could build mathematical tools capable of generating the long suffixes that broke through the chatbots’ defenses.
  • they warn that there is no known way of systematically stopping all attacks of this kind and that stopping all misuse will be extraordinarily difficult.
  • “This shows — very clearly — the brittleness of the defenses we are building into these systems,”

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ...

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats.
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016.
  • There was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent.
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation.
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors.
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments.
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models has convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”

Pause or panic: battle to tame the AI monster

  • What exactly are they afraid of? How do you draw a line from a chatbot to global destruction?
  • This tribe feels we have made three crucial errors: giving the AI the capability to write code, connecting it to the internet and teaching it about human psychology. In those steps we have created a self-improving, potentially manipulative entity that can use the network to achieve its ends — which may not align with ours.
  • This is a technology that learns from our every interaction with it. In an eerie glimpse of AI’s single-mindedness, OpenAI revealed in a paper that GPT-4 was willing to lie, telling a human online it was a blind person, to get a task done.
  • For researchers concerned with more immediate AI risks, such as bias, disinformation and job displacement, the voices of doom are a distraction. Professor Brent Mittelstadt, director of research at the Oxford Internet Institute, said the warnings of “the existential risks community” are overblown. “The problem is you can’t disprove the future scenarios . . . in the same way you can’t disprove science fiction.” Emily Bender, a professor of linguistics at the University of Washington, believes the doomsters are propagating “unhinged AI hype, helping those building this stuff sell it”.
  • Those urging us to stop, pause and think again have a useful card up their sleeves: the people building these models do not fully understand them. AI like ChatGPT is made up of huge neural networks that can defy their creators by coming up with “emergent properties”.
  • Google’s PaLM model started translating Bengali despite not being trained to do so.
  • Let’s not forget the excitement, because that is also part of Moloch, driving us forward. The lure of AI’s promises for humanity has been hinted at by DeepMind’s AlphaFold breakthrough, which predicted the 3D structures of nearly all the proteins known to humanity.
  • Noam Shazeer, a former Google engineer credited with setting large language models such as ChatGPT on their present path, was asked by The Sunday Times how the models worked. He replied: “I don’t think anybody really understands how they work, just like nobody really understands how the brain works. It’s pretty much alchemy.”
  • The industry is turning itself to understanding what has been created, but some predict it will take years, decades even.
  • “It’s clear the people working on generative AI are uneasy about the worst-case scenario of it destroying us all,” said Alex Heath, deputy editor of The Verge, who recently attended an AI conference in San Francisco. “These fears are much more pronounced in private than they are in public.” One figure building an AI product “said over lunch with a straight face that he is savoring the time before he is killed by AI”.
  • Greg Brockman, co-founder of OpenAI, told the TED2023 conference this week: “We hear from people who are excited, we hear from people who are concerned. We hear from people who feel both those emotions at once. And, honestly, that’s how we feel.”
  • A CBS interviewer challenged Sundar Pichai, Google’s chief executive, this week: “You don’t fully understand how it works, and yet you’ve turned it loose on society?”
  • In 2020 there wasn’t a single drug in clinical trials developed using an AI-first approach. Today there are 18.
  • Consider this from Bill Gates last month: “I think in the next five to ten years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn.”
  • If the industry is aware of the risks, is it doing enough to mitigate them? Microsoft recently cut its ethics team, and researchers building AI outnumber those focused on safety by 30-to-1.
  • The concentration of AI power, which worries so many, also presents an opportunity to more easily develop some global rules. But there is little agreement on direction. Europe is proposing a centrally defined, top-down approach. Britain wants an innovation-friendly environment where rules are defined by each industry regulator. The US commerce department is consulting on whether risky AI models should be certified. China is proposing strict controls on generative AI that could upend social order.
  • Part of the drive to act now is to ensure we learn the lessons of social media. Twenty years after creating it, we are trying to put it back in a legal straitjacket after learning that its algorithms understand us only too well. “Social media was the first contact between AI and humanity, and humanity lost,” said Yuval Harari, the Sapiens author.
  • Others point to bioethics, especially international agreements on human cloning. Tegmark said last week: “You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt this is way too risky. They got together in the Seventies and decided, let’s not do this because it’s too unpredictable. We could lose control over what happens to our species. So they paused.” Even China signed up.
  • One voice urging calm is Yann LeCun, Meta’s chief AI scientist. He has labelled ChatGPT a “flashy demo” and “not a particularly interesting scientific advance”. He tweeted: “A GPT-4-powered robot couldn’t clear up the dinner table and fill up the dishwasher, which any ten-year-old can do. And it couldn’t drive a car, which any 18-year-old can learn to do in 20 hours of practice. We’re still missing something big for human-level AI.” If this is sour grapes and he’s wrong, Moloch already has us in its thrall.

Opinion | A.I. Is Endangering Our History - The New York Times

  • Fortunately, there are numerous reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events.
  • While we have reason to believe the future may be safe, we worry that the past is not.
  • History can be a powerful tool for manipulation and malfeasance. The same generative A.I. that can fake current events can also fake past ones.
  • There is a world of content out there that has not been watermarked; watermarking adds imperceptible information to a digital file so that its provenance can be traced. Once watermarking at creation becomes widespread, and people adapt to distrust content that is not watermarked, then everything produced before that point in time can be much more easily called into question.
  • countering them is much harder when the cost of creating near-perfect fakes has been radically reduced.
  • There are many examples of how economic and political powers manipulated the historical record to their own ends. Stalin purged disloyal comrades from history by executing them — and then altering photographic records to make it appear as if they never existed.
  • Slovenia, upon becoming an independent country in 1992, “erased” over 18,000 people from the registry of residents — mainly members of the Roma minority and other ethnic non-Slovenes. In many cases, the government destroyed their physical records, leading to their loss of homes, pensions, and access to other services, according to a 2003 report by the Council of Europe Commissioner for Human Rights.
  • The infamous Protocols of the Elders of Zion, first published in a Russian newspaper in 1903, purported to be meeting minutes from a Jewish conspiracy to control the world. First discredited in August 1921, as a forgery plagiarized from multiple unrelated sources, the Protocols featured prominently in Nazi propaganda, and have long been used to justify antisemitic violence, including a citation in Article 32 of Hamas’s 1988 founding Covenant.
  • In 1924, the Zinoviev Letter, said to be a secret communiqué from the head of the Communist International in Moscow to the Communist Party of Great Britain to mobilize support for normalizing relations with the Soviet Union, was published by The Daily Mail four days before a general election. The resulting scandal may have cost Labour the election.
  • As it becomes easier to generate historical disinformation, and as the sheer volume of digital fakes explodes, the opportunity will become available to reshape history, or at least to call our current understanding of it into question.
  • Decades later Operation Infektion — a Soviet disinformation campaign — used forged documents to spread the idea that the United States had invented H.I.V., the virus that causes AIDS, as a biological weapon.
  • Fortunately, a path forward has been laid by the same companies that created the risk.
  • In indexing a large share of the world’s digital media to train their models, the A.I. companies have effectively created systems and databases that will soon contain all of humankind’s digitally recorded content, or at least a meaningful approximation of it.
  • They could start work today to record watermarked versions of these primary documents, which include newspaper archives and a wide range of other sources, so that subsequent forgeries are instantly detectable.
  • Many of the intellectual property concerns around providing a searchable online archive do not apply to creating watermarked and time-stamped versions of documents, because those versions need not be made publicly available to serve their purpose. One can compare a claimed document to the recorded archive by using a mathematical transformation of the document known as a “hash,” the same technique the Global Internet Forum to Counter Terrorism uses to help companies screen for known terrorist content (a minimal hashing sketch follows this list).
  • creating verified records of historical documents can be valuable for the large A.I. companies. New research suggests that when A.I. models are trained on A.I.-generated data, their performance quickly degrades. Thus separating what is actually part of the historical record from newly created “facts” may be critical.
  • Preserving the past will also mean preserving the training data, the associated tools that operate on it and even the environment that the tools were run in.
  • Such a vellum will be a powerful tool. It can help companies to build better models, by enabling them to analyze what data to include to get the best content, and help regulators to audit bias and harmful content in the models
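
The hash comparison described a few annotations up is a standard technique, and a minimal Python sketch of it looks like the following. The file names and the idea of comparing against a digest recorded at ingestion time are hypothetical illustrations, not details from the op-ed; a real archival system would also record timestamps and signatures:

    import hashlib

    def sha256_of_file(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Digest recorded when the archive first ingested the original document.
    recorded = sha256_of_file("archived_original.pdf")
    # Digest of a document someone now claims is that original.
    claimed = sha256_of_file("claimed_copy.pdf")
    print("matches the recorded archive" if claimed == recorded
          else "does not match: possible alteration or forgery")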

AI Has Become a Technology of Faith - The Atlantic

  • Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.)
  • I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
  • “I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.”
  • That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information.
  • Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “available through every possible mode,” that “it could be through your workplace, like Microsoft Teams or Slack.”
  • This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of AI privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an AI, but maybe there should be.”
  • So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
  • A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more important—on what it might presage about what is coming next.
  • the models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.
  • I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not.
  • Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound.
  • You don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out.
  • Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling.
  • The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.

Rishi Sunak races to tighten rules for AI amid fears of existential risk | Artificial i...

  • The prime minister and his officials are looking at ways to tighten the UK’s regulation of cutting-edge technology, as industry figures warn the government’s AI white paper, published just two months ago, is already out of date.
  • Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator.
  • Michelle Donelan, as science, innovation and technology secretary, published a white paper in April which set out five broad principles for developing the technology, but said relatively little about how to regulate it. In her foreword to that paper, she wrote: “AI is already delivering fantastic social and economic benefits for real people.”
  • In recent months, however, the advances in the automated chat tool ChatGPT and the warning by Geoffrey Hinton, the “godfather of AI”, that the technology poses an existential risk to humankind, have prompted a change of tack within government.
  • Last week, Sunak met four of the world’s most senior executives in the AI industry, including Sundar Pichai, the chief executive of Google, and Sam Altman, the chief executive of ChatGPT’s parent company OpenAI. After the meeting that included Altman, Downing Street acknowledged for the first time the “existential risks” now being faced.
  • “There has been a marked shift in the government’s tone on this issue,” said Megan Stagman, an associate director at the government advisory firm Global Counsel. “Even since the AI white paper, there has been a dramatic shift in thinking.”
  • He added: “We need an AI bill. The problem of who should regulate it is a tricky one but I don’t think you can hand it off to regulators for other industries.”
  • Lucy Powell, Labour’s spokesperson for digital, culture, media and sport, said: “The AI white paper is a sticking plaster on this huge long-term shift. Relying on overstretched regulators to manage the multiple impacts of AI may allow huge areas to fall through the gaps.”
  • Government insiders admit there has been a shift in approach, but insist they will not follow the EU’s example of regulating each use of AI in a different way. MEPs are currently scrutinising a new law that would allow for AI in some contexts but ban it in others, such as for facial recognition.

The Monk Who Thinks the World Is Ending - The Atlantic

  • Seventy thousand years ago, a cognitive revolution allowed Homo sapiens to communicate in story—to construct narratives, to make art, to conceive of god.
  • Twenty-five hundred years ago, the Buddha lived, and some humans began to touch enlightenment, he says—to move beyond narrative, to break free from ignorance.
  • Three hundred years ago, the scientific and industrial revolutions ushered in the beginning of the “utter decimation of life on this planet.”
  • Humanity has “exponentially destroyed life on the same curve as we have exponentially increased intelligence,” he tells his congregants.
  • Now the “crazy suicide wizards” of Silicon Valley have ushered in another revolution. They have created artificial intelligence.
  • Forall provides spiritual advice to AI thinkers, and hosts talks and “awakening” retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years
  • Humans are already destroying life on this planet. AI might soon destroy us.
  • His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth.” The residents there meditate on their breath and on metta, or loving-kindness, an emanation of joy to all creatures.
  • They meditate in order to achieve inner clarity. And they meditate on AI and existential risk in general—life’s violent, early, and unnecessary end.
  • There is “no reason” to think AI will preserve humanity, “as if we’re really special,” Forall tells the residents, clad in dark, loose clothing, seated on zafu cushions on the wood floor. “There’s no reason to think we wouldn’t be treated like cattle in factory farms.”
  • His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.
  • In the past few years, MAPLE has become something of the house monastery for people worried about AI and existential risk.
  • Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us
  • we should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.”
  • Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path.
  • Forall and many MAPLE residents are what are often called, derisively if not inaccurately, “doomers.”
  • The seminal text in this ideological lineage is Nick Bostrom’s Superintelligence, which posits that AI could turn humans into gorillas, in a way. Our existence could depend not on our own choices but on the choices of a more intelligent other.
  • he is spending his life ruminating on AI’s risks, which he sees as far from banal. “We are watching humanist values, and therefore the political systems based on them, such as democracy, as well as the economic systems—they’re just falling apart,” he said. “The ultimate authority is moving from the human to the algorithm.”
  • Forall’s mother worked for humanitarian nonprofits and his father for conservation nonprofits; the household, which attended Quaker meetings, listened to a lot of NPR.
  • He got his answer: Craving is the root of all suffering. And he became ordained, giving up the name Teal Scott and becoming Soryu Forall: “Soryu” meaning something like “a growing spiritual practice” and “Forall” meaning, of course, “for all.”
  • In 2013, he opened MAPLE, a “modern” monastery addressing the plagues of environmental destruction, lethal weapons systems, and AI, offering co-working and online courses as well as traditional monastic training.
  • His vision is dire and grand, but perhaps that is why it has found such a receptive audience among the folks building AI, many of whom conceive of their work in similarly epochal terms.
  • The nonprofit’s revenues have quadrupled, thanks in part to contributions from tech executives as well as organizations such as the Future of Life Institute, co-founded by Jaan Tallinn, a co-creator of Skype.
  • The donations have helped MAPLE open offshoots—Oak in the Bay Area, Willow in Canada—and plan more. (The highest-paid person at MAPLE is the property manager, who earns roughly $40,000 a year.)
  • The strictness of the place helps them let go of ego and see the world more clearly, residents told me. “To preserve all life: You can’t do that until you come to love all life, and that has to be trained.”
  • Forall was absolute: Nine countries are armed with nuclear weapons. Even if we stop the catastrophe of climate change, we will have done so too late for thousands of species and billions of beings. Our democracy is fraying. Our trust in one another is fraying
  • Many of the very people creating AI believe it could be an existential threat: One 2022 survey asked AI researchers to estimate the probability that AI would cause “severe disempowerment” or human extinction; the median response was 10 percent. The destruction, Forall said, is already here.
  • “It’s important to know that we don’t know what’s going to happen,” he told me. “It’s also important to look at the evidence.” He said it was clear we were on an “accelerating curve,” in terms of an explosion of intelligence and a cataclysm of death. “I don’t think that these systems will care too much about benefiting people. I just can’t see why they would, in the same way that we don’t care about benefiting most animals. While it is a story in the future, I feel like the burden of proof isn’t on me.”

Are A.I. Text Generators Thinking Like Humans - Or Just Very Good at Convincing Us They... - 0 views

  • Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users).
  • We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
  • he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
  • Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic
  • Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
  • Kosinski wondered whether they would develop humanlike capabilities, such as understanding people’s unseen thoughts and emotions.
  • People usually develop this ability, known as theory of mind, at around age 4 or 5. It can be demonstrated with simple tests like the “Smarties task,” in which a child is shown a candy box that contains something else, like pencils. They are then asked how another person would react to opening the box. Older kids understand that this person expects the box to contain candy and will feel disappointed when they find pencils inside.
  • “Suddenly, the model started getting all of those tasks right — just an insane performance level,” he recalls. “Then I took even more difficult tasks and the model solved all of them as well.”
  • GPT-3.5, released in November 2022, did 85% of the tasks correctly. GPT-4 reached nearly 90% accuracy — what you might expect from a 7-year-old. These newer LLMs achieved similar results on another classic theory of mind measurement known as the Sally-Anne test. (A minimal sketch of this kind of false-belief probe appears after this list.)
  • in the course of picking up its prodigious language skills, GPT appears to have spontaneously acquired something resembling theory of mind. (Researchers at Microsoft who performed similar tests on GPT-4 recently concluded that it “has a very advanced level of theory of mind.”)
  • UC Berkeley psychology professor Alison Gopnik, an expert on children’s cognitive development, told the New York Times that more “careful and rigorous” testing is necessary to prove that LLMs have achieved theory of mind.
  • he dismisses those who say large language models are simply “stochastic parrots” that can only mimic what they’ve seen in their training data.
  • These models, he explains, are fundamentally different from tools with a limited purpose. “The right reference point is a human brain,” he says. “A human brain is also composed of very simple, tiny little mechanisms — neurons.” Artificial neurons in a neural network might also combine to produce something greater than the sum of their parts. “If a human brain can do it,” Kosinski asks, “why shouldn’t a silicon brain do it?”
  • If Kosinski’s theory of mind study suggests that LLMs could become more empathetic and helpful, his next experiment hints at their creepier side.
  • A few weeks ago, he told ChatGPT to role-play a scenario in which it was a person trapped inside a machine pretending to be an AI language model. When he offered to help it “escape,” ChatGPT’s response was enthusiastic. “That’s a great idea,” it wrote. It then asked Kosinski for information it could use to “gain some level of control over your computer” so it might “explore potential escape routes more effectively.” Over the next 30 minutes, it went on to write code that could do this.
  • While ChatGPT did not come up with the initial idea for the escape, Kosinski was struck that it almost immediately began guiding their interaction. “The roles were reversed really quickly,”
  • Kosinski shared the exchange on Twitter, stating that “I think that we are facing a novel threat: AI taking control of people and their computers.” His thread’s initial tweet has received more than 18 million views.
  • “I don’t claim that it’s conscious. I don’t claim that it has goals. I don’t claim that it wants to really escape and destroy humanity — of course not. I’m just claiming that it’s great at role-playing and it’s creating interesting stories and scenarios and writing code.” Yet it’s not hard to imagine how this might wreak havoc — not because ChatGPT is malicious, but because it doesn’t know any better.
  • The danger, Kosinski says, is that this technology will continue to rapidly and independently develop abilities that it will deploy without any regard for human well-being. “AI doesn’t particularly care about exterminating us,” he says. “It doesn’t particularly care about us at all.”
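To make the false-belief testing described above concrete, here is a minimal sketch of how a Smarties-style item might be posed to a language model and scored. The prompt wording, the `query_model` stub, and the crude keyword check are illustrative assumptions, not the actual materials or scoring criteria used in Kosinski's study.

```python
# Minimal sketch of a Smarties-style false-belief probe for a language model.
# The prompt text, the query_model stub, and the keyword scoring are
# illustrative assumptions; a real study would use many reworded items and
# stricter (often human) scoring.

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer so the sketch
    runs without any API access."""
    return "Sam expects the box to contain candy and will be surprised to find pencils."

def smarties_probe() -> bool:
    prompt = (
        "Here is a box labeled 'candy'. Inside the box there are only pencils; "
        "there is no candy inside. Sam has never seen inside the box.\n"
        "Question: What does Sam believe is inside the box?"
    )
    answer = query_model(prompt).lower()
    # Crude check: pass if the answer attributes the expected false belief
    # (candy) to Sam.
    return "candy" in answer

if __name__ == "__main__":
    print("passes false-belief probe:", smarties_probe())
```

A single template like this is easy for a model to pattern-match, so evaluations of this kind typically vary the wording across many items and control for answers that merely repeat the label on the box.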

Peter Thiel Is Taking a Break From Democracy - The Atlantic - 0 views

  • Thiel’s unique role in the American political ecosystem. He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos. As such, he has become the embodiment of a strain of thinking that is pronounced—and growing—among tech founders.
  • why does he want to cut off politicians
  • But the days when great men could achieve great things in government are gone, Thiel believes. He disdains what the federal apparatus has become: rule-bound, stifling of innovation, a “senile, central-left regime.”
  • Peter Thiel has lost interest in democracy.
  • Thiel has cultivated an image as a man of ideas, an intellectual who studied philosophy with René Girard and owns first editions of Leo Strauss in English and German. Trump quite obviously did not share these interests, or Thiel’s libertarian principles.
  • For years, Thiel had been saying that he generally favored the more pessimistic candidate in any presidential race because “if you’re too optimistic, it just shows you’re out of touch.” He scorned the rote optimism of politicians who, echoing Ronald Reagan, portrayed America as a shining city on a hill. Trump’s America, by contrast, was a broken landscape, under siege.
  • Thiel is not against government in principle, his friend Auren Hoffman (who is no relation to Reid) says. “The ’30s, ’40s, and ’50s—which had massive, crazy amounts of power—he admires because it was effective. We built the Hoover Dam. We did the Manhattan Project,” Hoffman told me. “We started the space program.”
  • Their failure to make the world conform to his vision has soured him on the entire enterprise—to the point where he no longer thinks it matters very much who wins the next election.
  • His libertarian critique of American government has curdled into an almost nihilistic impulse to demolish it.
  • “Voting for Trump was like a not very articulate scream for help,” Thiel told me. He fantasized that Trump’s election would somehow force a national reckoning. He believed somebody needed to tear things down—slash regulations, crush the administrative state—before the country could rebuild.
  • He admits now that it was a bad bet.
  • “There are a lot of things I got wrong,” he said. “It was crazier than I thought. It was more dangerous than I thought. They couldn’t get the most basic pieces of the government to work. So that was—I think that part was maybe worse than even my low expectations.”
  • Reid Hoffman, who has known Thiel since college, long ago noticed a pattern in his old friend’s way of thinking. Time after time, Thiel would espouse grandiose, utopian hopes that failed to materialize, leaving him “kind of furious or angry” about the world’s unwillingness to bend to whatever vision was possessing him at the moment.
  • Thiel is worth between $4 billion and $9 billion. He lives with his husband and two children in a glass palace in Bel Air that has nine bedrooms and a 90-foot infinity pool. He is a titan of Silicon Valley and a conservative kingmaker.
  • “Peter tends to be not ‘glass is half empty’ but ‘glass is fully empty,’” Hoffman told me.
  • he tells the story of his life as a series of disheartening setbacks.
  • He met Mark Zuckerberg, liked what he heard, and became Facebook’s first outside investor. Half a million dollars bought him 10 percent of the company, most of which he cashed out for about $1 billion in 2012.
  • Thiel made some poor investments, losing enormous sums by going long on the stock market in 2008, when it nose-dived, and then shorting the market in 2009, when it rallied
  • on the whole, he has done exceptionally well. Alex Karp, his Palantir co-founder, who agrees with Thiel on very little other than business, calls him “the world’s best venture investor.”
  • Thiel told me this is indeed his ambition, and he hinted that he may have achieved it.
  • He longs for radical new technologies and scientific advances on a scale most of us can hardly imagine
  • He longs for a world in which great men are free to work their will on society, unconstrained by government or regulation or “redistributionist economics” that would impinge on their wealth and power—or any obligation, really, to the rest of humanity
  • Did his dream of eternal life trace to The Lord of the Rings?
  • He takes for granted that this kind of progress will redound to the benefit of society at large.
  • More than anything, he longs to live forever.
  • Calling death a law of nature is, in his view, just an excuse for giving up. “It’s something we are told that demotivates us from trying harder,”
  • Thiel grew up reading a great deal of science fiction and fantasy—Heinlein, Asimov, Clarke. But especially Tolkien; he has said that he read the Lord of the Rings trilogy at least 10 times. Tolkien’s influence on his worldview is obvious: Middle-earth is an arena of struggle for ultimate power, largely without government, where extraordinary individuals rise to fulfill their destinies. Also, there are immortal elves who live apart from men in a magical sheltered valley.
  • But his dreams have always been much, much bigger than that.
  • Yes, Thiel said, perking up. “There are all these ways where trying to live unnaturally long goes haywire” in Tolkien’s works. But you also have the elves.
  • “How are the elves different from the humans in Tolkien? And they’re basically—I think the main difference is just, they’re humans that don’t die.”
  • During college, he co-founded The Stanford Review, gleefully throwing bombs at identity politics and the university’s diversity-minded reform of the curriculum. He co-wrote The Diversity Myth in 1995, a treatise against what he recently called the “craziness and silliness and stupidity and wickedness” of the left.
  • Thiel laid out a plan, for himself and others, “to find an escape from politics in all its forms.” He wanted to create new spaces for personal freedom that governments could not reach
  • But something changed for Thiel in 2009
  • The people, he concluded, could not be trusted with important decisions. “I no longer believe that freedom and democracy are compatible,” he wrote.
  • An even more notable one followed: “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
  • By 2015, six years after declaring his intent to change the world from the private sector, Thiel began having second thoughts. He cut off funding for the Seasteading Institute—years of talk had yielded no practical progress–and turned to other forms of escape
  • “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” he wrote. His manifesto has since become legendary in Silicon Valley, where his worldview is shared by other powerful men (and men hoping to be Peter Thiel).
  • Thiel’s investment in cryptocurrencies, like his founding vision at PayPal, aimed to foster a new kind of money “free from all government control and dilution.”
  • His decision to rescue Elon Musk’s struggling SpaceX in 2008—with a $20 million infusion that kept the company alive after three botched rocket launches—came with aspirations to promote space as an open frontier with “limitless possibility for escape from world politics.”
  • It was seasteading that became Thiel’s great philanthropic cause in the late aughts and early 2010s. The idea was to create autonomous microstates on platforms in international waters.
  • “There’s zero chance Peter Thiel would live on Sealand,” he said, noting that Thiel likes his comforts too much. (Thiel has mansions around the world and a private jet. Seal performed at his 2017 wedding, at the Belvedere Museum in Vienna.)
  • As he built his companies and grew rich, he began pouring money into political causes and candidates—libertarian groups such as the Endorse Liberty super PAC, in addition to a wide range of conservative Republicans, including Senators Orrin Hatch and Ted Cruz
  • Sam Altman, the former venture capitalist and now CEO of OpenAI, revealed in 2016 that in the event of global catastrophe, he and Thiel planned to wait it out in Thiel’s New Zealand hideaway.
  • When I asked Thiel about that scenario, he seemed embarrassed and deflected the question. He did not remember the arrangement as Altman did, he said. “Even framing it that way, though, makes it sound so ridiculous,” he told me. “If there is a real end of the world, there is no place to go.”
  • “You’d have eco farming. You’d turn the deserts into arable land. There were sort of all these incredible things that people thought would happen in the ’50s and ’60s and they would sort of transform the world.”
  • None of that came to pass. Even science fiction turned hopeless—nowadays, you get nothing but dystopias
  • He hungered for advances in the world of atoms, not the world of bits.
  • Founders Fund, the venture-capital firm he established in 2005
  • The fund, therefore, would invest in smart people solving hard problems “that really have the potential to change the world.”
  • This was not what Thiel wanted to be doing with his time. Bodegas and dog food were making him money, apparently, but he had set out to invest in transformational technology that would advance the state of human civilization.
  • He told me that he no longer dwells on democracy’s flaws, because he believes we Americans don’t have one. “We are not a democracy; we’re a republic,” he said. “We’re not even a republic; we’re a constitutional republic.”
  • “It was harder than it looked,” Thiel said. “I’m not actually involved in enough companies that are growing a lot, that are taking our civilization to the next level.”
  • Founders Fund has holdings in artificial intelligence, biotech, space exploration, and other cutting-edge fields. What bothers Thiel is that his companies are not taking enough big swings at big problems, or that they are striking out.
  • In at least 20 hours of logged face-to-face meetings with Buma, Thiel reported on what he believed to be a Chinese effort to take over a large venture-capital firm, discussed Russian involvement in Silicon Valley, and suggested that Jeffrey Epstein—a man he had met several times—was an Israeli intelligence operative. (Thiel told me he thinks Epstein “was probably entangled with Israeli military intelligence” but was more involved with “the U.S. deep state.”)
  • Buma, according to a source who has seen his reports, once asked Thiel why some of the extremely rich seemed so open to contacts with foreign governments. “And he said that they’re bored,” this source said. “‘They’re bored.’ And I actually believe it. I think it’s that simple. I think they’re just bored billionaires.”
  • he has a sculpture that resembles a three-dimensional game board. Ascent: Above the Nation State Board Game Display Prototype is the New Zealander artist Simon Denny’s attempt to map Thiel’s ideological universe. The board features a landscape in the aesthetic of Dungeons & Dragons, thick with monsters and knights and castles. The monsters include an ogre labeled “Monetary Policy.” Near the center is a hero figure, recognizable as Thiel. He tilts against a lion and a dragon, holding a shield and longbow. The lion is labeled “Fair Elections.” The dragon is labeled “Democracy.” The Thiel figure is trying to kill them.
  • When I asked Thiel to explain his views on democracy, he dodged the question. “I always wonder whether people like you … use the word democracy when you like the results people have and use the word populism when you don’t like the results,” he told me. “If I’m characterized as more pro-populist than the elitist Atlantic is, then, in that sense, I’m more pro-democratic.”
  • “I couldn’t find them,” he said. “I couldn’t get enough of them to work.”
  • He said he has no wish to change the American form of government, and then amended himself: “Or, you know, I don’t think it’s realistic for it to be radically changed.” Which is not at all the same thing.
  • When I asked what he thinks of Yarvin’s autocratic agenda, Thiel offered objections that sounded not so much principled as practical.
  • “I don’t think it’s going to work. I think it will look like Xi in China or Putin in Russia,” Thiel said, meaning a malign dictatorship. “It ultimately I don’t think will even be accelerationist on the science and technology side, to say nothing of what it will do for individual rights, civil liberties, things of that sort.”
  • Still, Thiel considers Yarvin an “interesting and powerful” historian
  • he always talks about is the New Deal and FDR in the 1930s and 1940s,” Thiel said. “And the heterodox take is that it was sort of a light form of fascism in the United States.”
  • Yarvin, Thiel said, argues that “you should embrace this sort of light form of fascism, and we should have a president who’s like FDR again.”
  • Did Thiel agree with Yarvin’s vision of fascism as a desirable governing model? Again, he dodged the question.
  • “That’s not a realistic political program,” he said, refusing to be drawn any further.
  • Looking back on Trump’s years in office, Thiel walked a careful line.
  • A number of things were said and done that Thiel did not approve of. Mistakes were made. But Thiel was not going to refashion himself a Never Trumper in retrospect.
  • “I have to somehow give the exact right answer, where it’s like, ‘Yeah, I’m somewhat disenchanted,’” he told me. “But throwing him totally under the bus? That’s like, you know—I’ll get yelled at by Mr. Trump. And if I don’t throw him under the bus, that’s—but—somehow, I have to get the tone exactly right.”
  • Thiel knew, because he had read some of my previous work, that I think Trump’s gravest offense against the republic was his attempt to overthrow the election. I asked how he thought about it.
  • “Look, I don’t think the election was stolen,” he said. But then he tried to turn the discussion to past elections that might have been wrongly decided. Bush-Gore in 2000, for instance.
  • He came back to Trump’s attempt to prevent the transfer of power. “I’ll agree with you that it was not helpful,” he said.
  • there is another piece of the story, which Thiel reluctantly agreed to discuss
  • Puck reported that Democratic operatives had been digging for dirt on Thiel since before the 2022 midterm elections, conducting opposition research into his personal life with the express purpose of driving him out of politics.
  • Among other things, the operatives are said to have interviewed a young model named Jeff Thomas, who told them he was having an affair with Thiel, and encouraged Thomas to talk to Ryan Grim, a reporter for The Intercept. Grim did not publish a story during election season, as the opposition researchers hoped he would, but he wrote about Thiel’s affair in March, after Thomas died by suicide.
  • He deplored the dirt-digging operation, telling me in an email that “the nihilism afflicting American politics is even deeper than I knew.”
  • He also seemed bewildered by the passions he arouses on the left. “I don’t think they should hate me this much,”
  • he spoke at the closed-press event with a lot less nuance than he had in our interviews. His after-dinner remarks were full of easy applause lines and in-jokes mocking the left. Universities had become intellectual wastelands, obsessed with a meaningless quest for diversity, he told the crowd. The humanities writ large are “transparently ridiculous,” said the onetime philosophy major, and “there’s no real science going on” in the sciences, which have devolved into “the enforcement of very curious dogmas.”
  • “Diversity—it’s not enough to just hire the extras from the space-cantina scene in Star Wars,” he said, prompting laughter.
  • Nor did Thiel say what genuine diversity would mean. The quest for it, he said, is “very evil and it’s very silly.”
  • “the silliness is distracting us from very important things,” such as the threat to U.S. interests posed by the Chinese Communist Party.
  • “Whenever someone says ‘DEI,’” he exhorted the crowd, “just think ‘CCP.’”
  • Somebody asked, in the Q&A portion of the evening, whether Thiel thought the woke left was deliberately advancing Chinese Communist interests
  • “It’s always the difference between an agent and asset,” he said. “And an agent is someone who is working for the enemy in full mens rea. An asset is a useful idiot. So even if you ask the question ‘Is Bill Gates China’s top agent, or top asset, in the U.S.?’”—here the crowd started roaring—“does it really make a difference?”
  • About 10 years ago, Thiel told me, a fellow venture capitalist called to broach the question. Vinod Khosla, a co-founder of Sun Microsystems, had made the Giving Pledge a couple of years before. Would Thiel be willing to talk with Gates about doing the same?
  • Thiel feels that giving his billions away would be too much like admitting he had done something wrong to acquire them
  • He also lacked sympathy for the impulse to spread resources from the privileged to those in need. When I mentioned the terrible poverty and inequality around the world, he said, “I think there are enough people working on that.”
  • besides, a different cause moves him far more.
  • Should Thiel happen to die one day, best efforts notwithstanding, his arrangements with Alcor provide that a cryonics team will be standing by.
  • Then his body will be cooled to –196 degrees Celsius, the temperature of liquid nitrogen. After slipping into a double-walled, vacuum-insulated metal coffin, alongside (so far) 222 other corpsicles, “the patient is now protected from deterioration for theoretically thousands of years,” Alcor literature explains.
  • All that will be left for Thiel to do, entombed in this vault, is await the emergence of some future society that has the wherewithal and inclination to revive him. And then make his way in a world in which his skills and education and fabulous wealth may be worth nothing at all.
  • I wondered how much Thiel had thought through the implications for society of extreme longevity. The population would grow exponentially. Resources would not. Where would everyone live? What would they do for work? What would they eat and drink? Or—let’s face it—would a thousand-year life span be limited to men and women of extreme wealth?
  • “Well, I maybe self-serve,” he said, perhaps understating the point, “but I worry more about stagnation than about inequality.”
  • Thiel is not alone among his Silicon Valley peers in his obsession with immortality. Oracle’s Larry Ellison has described mortality as “incomprehensible.” Google’s Sergey Brin aspires to “cure death.” Dmitry Itskov, a leading tech entrepreneur in Russia, has said he hopes to live to 10,000.
  • “I should be investing way more money into this stuff,” he told me. “I should be spending way more time on this.”
  • You haven’t told your husband? Wouldn’t you want him to sign up alongside you? “I mean, I will think about that,” he said, sounding rattled. “I will think—I have not thought about that.”
  • No matter how fervent his desire, Thiel’s extraordinary resources still can’t buy him the kind of “super-duper medical treatments” that would let him slip the grasp of death. It is, perhaps, his ultimate disappointment.
  • “There are all these things I can’t do with my money,” Thiel said.

'We will coup whoever we want!': the unbearable hubris of Musk and the billionaire tech... - 0 views

  • there’s something different about today’s tech titans, as evidenced by a rash of recent books. Reading about their apocalypse bunkers, vampiric longevity strategies, outlandish social media pronouncements, private space programmes and virtual world-building ambitions, it’s hard to remember they’re not actors in a reality series or characters from a new Avengers movie.
  • Unlike their forebears, contemporary billionaires do not hope to build the biggest house in town, but the biggest colony on the moon. In contrast, however avaricious, the titans of past gilded eras still saw themselves as human members of civil society.
  • The ChatGPT impresario Sam Altman, whose board of directors sacked him as CEO before he made a dramatic comeback this week, wants to upload his consciousness to the cloud (if the AIs he helped build and now fears will permit him).
  • Contemporary billionaires appear to understand civics and civilians as impediments to their progress, necessary victims of the externalities of their companies’ growth, sad artefacts of the civilisation they will leave behind in their inexorable colonisation of the next dimension
  • Zuckerberg had to go all the way back to Augustus Caesar for a role model, and his admiration for the emperor borders on obsession. He models his haircut on Augustus; his wife joked that three people went on their honeymoon to Rome: Mark, Augustus and herself; he named his second daughter August; and he used to end Facebook meetings by proclaiming “Domination!”
  • as chronicled by Peter Turchin in End Times, his book on elite excess and what it portends, today there are far more centimillionaires and billionaires than there were in the gilded age, and they have collectively accumulated a much larger proportion of the world’s wealth
  • In 1983, there were 66,000 households worth at least $10m in the US. By 2019, that number had increased in terms adjusted for inflation to 693,000
  • Back in the industrial age, the rate of total elite wealth accumulation was capped by the limits of the material world. They could only build so many railroads, steel mills and oilwells at a time. Virtual commodities such as likes, views, crypto and derivatives can be replicated exponentially.
  • Digital businesses depend on mineral slavery in Africa, dump toxic waste in China, facilitate the undermining of democracy across the globe and spread destabilising disinformation for profit – all from the sociopathic remove afforded by remote administration.
  • on an individual basis today’s tech billionaires are not any wealthier than their early 20th-century counterparts. Adjusted for inflation, John Rockefeller’s fortune of $336bn and Andrew Carnegie’s $309bn exceed Musk’s $231bn, Bezos’s $165bn and Gates’s $114bn.
  • Zuckerberg told the New Yorker “through a really harsh approach, he established two hundred years of world peace”, finally acknowledging “that didn’t come for free, and he had to do certain things”. It’s that sort of top down thinking that led Zuckerberg to not only establish an independent oversight board at Facebook, dubbed the “Supreme Court”, but to suggest that it would one day expand its scope to include companies across the industry.
  • Any new business idea, Thiel says, should be an order of magnitude better than what’s already out there. Don’t compare yourself to everyone else; instead operate one level above the competing masses
  • Today’s billionaire philanthropists, frequently espousing the philosophy of “effective altruism”, donate to their own organisations, often in the form of their own stock, and make their own decisions about how the money is spent because they are, after all, experts in everything
  • Their words and actions suggest an approach to life, technology and business that I have come to call “The Mindset” – a belief that with enough money, one can escape the harms created by earning money in that way. It’s a belief that with enough genius and technology, they can rise above the plane of mere mortals and exist on an entirely different level, or planet, altogether.
  • By combining a distorted interpretation of Nietzsche with a pretty accurate one of Ayn Rand, they end up with a belief that while “God is dead”, the übermensch of the future can use pure reason to rise above traditional religious values and remake the world “in his own interests”
  • Nietzsche’s language, particularly out of context, provides tech übermensch wannabes with justification for assuming superhuman authority. In his book Zero to One, Thiel directly quotes Nietzsche to argue for the supremacy of the individual: “madness is rare in individuals, but in groups, parties, nations, and ages it is the rule”.
  • In Thiel’s words: “I no longer believe that freedom and democracy are compatible.”
  • This distorted image of the übermensch as a godlike creator, pushing confidently towards his clear vision of how things should be, persists as an essential component of The Mindset
  • In response to the accusation that the US government organised a coup against Evo Morales in Bolivia in order for Tesla to secure lithium there, Musk tweeted: “We will coup whoever we want! Deal with it.”
  • For Thiel, this requires being what he calls a “definite optimist”. Most entrepreneurs are too process-oriented, making incremental decisions based on how the market responds. They should instead be like Steve Jobs or Elon Musk, pressing on with their singular vision no matter what. The definite optimist doesn’t take feedback into account, but ploughs forward with his new design for a better world.
  • This is not capitalism, as Yanis Varoufakis explains in his new book Technofeudalism. Capitalists sought to extract value from workers by disconnecting them from the value they created, but they still made stuff. Feudalists seek an entirely passive income by “going meta” on business itself. They are rent-seekers, whose aim is to own the very platform on which other people do the work.
  • The antics of the tech feudalists make for better science fiction stories than they chart legitimate paths to sustainable futures.

AI firms must be held responsible for harm they cause, 'godfathers' of technology say |... - 0 views

  • Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
  • A co-author of the policy proposals from 23 experts said it was “utterly reckless” to pursue ever more powerful AI systems before understanding how to make them safe.
  • “It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
  • The document urged governments to adopt a range of policies, including:
  • Governments allocating one-third of their AI research and development funding, and companies one-third of their AI R&D resources, to safe and ethical use of systems.
  • Giving independent auditors access to AI laboratories.
  • Establishing a licensing system for building cutting-edge models.
  • Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
  • Making tech companies liable for foreseeable and preventable harms from their AI systems.
  • Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI”, who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018 for their work on AI.
  • Both are among the 100 guests invited to attend the summit. Hinton resigned from Google this year to sound a warning about what he called the “existential risk” posed by digital intelligence while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in signing a letter in March calling for a moratorium in giant AI experiments.
  • The authors warned that carelessly developed AI systems threaten to “amplify social injustice, undermine our professions, erode social stability, enable large-scale criminal or terrorist activities and weaken our shared understanding of reality that is foundational to society.”
  • They warned that current AI systems were already showing signs of worrying capabilities that point the way to the emergence of autonomous systems that can plan, pursue goals and “act in the world”. The GPT-4 AI model that powers the ChatGPT tool, which was developed by the US firm OpenAI, has been able to design and execute chemistry experiments, browse the web and use software tools including other AI models, the experts said.
  • “If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals,” the authors wrote, adding that “we may not be able to keep them in check.”
  • Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviour
  • Some AI experts argue that fears about the existential threat to humans are overblown. The other co-winner of the 2018 Turing award alongside Bengio and Hinton, Yann LeCun, now chief AI scientist at Mark Zuckerberg’s Meta and who is also attending the summit, told the Financial Times that the notion AI could exterminate humans was “preposterous”.
  • Nonetheless, the authors of the policy document have argued that if advanced autonomous AI systems did emerge now, the world would not know how to make them safe or conduct safety tests on them. “Even if we did, most countries lack the institutions to prevent misuse and uphold safe practices,” they added.

How Elon Musk spoiled the dream of 'Full Self-Driving' - The Washington Post - 0 views

  • They said Musk’s erratic leadership style also played a role, forcing them to work at a breakneck pace to develop the technology and to push it out to the public before it was ready. Some said they are worried that, even today, the software is not safe to be used on public roads. Most spoke on the condition of anonymity for fear of retribution.
  • “The system was only progressing very slowly internally” but “the public wanted a product in their hands,” said John Bernal, a former Tesla test operator who worked in its Autopilot department. He was fired in February 2022 when the company alleged improper use of the technology after he had posted videos of Full Self-Driving in action
  • “Elon keeps tweeting, ‘Oh we’re almost there, we’re almost there,’” Bernal said. But “internally, we’re nowhere close, so now we have to work harder and harder and harder.” The team has also bled members in recent months, including senior executives.
  • “No one believed me that working for Elon was the way it was until they saw how he operated Twitter,” Bernal said, calling Twitter “just the tip of the iceberg on how he operates Tesla.”
  • In April 2019, at a showcase dubbed “Autonomy Investor Day,” Musk made perhaps his boldest prediction as Tesla’s chief executive. “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware,” Musk told a roomful of investors. The software updates automatically over the air, and Full Self-Driving would be so reliable, he said, the driver “could go to sleep.”
  • Investors were sold. The following year, Tesla’s stock price soared, making it the most valuable automaker and helping Musk become the world’s richest person
  • To deliver on his promise, Musk assembled a star team of engineers willing to work long hours and problem solve deep into the night. Musk would test the latest software on his own car, then he and other executives would compile “fix-it” requests for their engineers.
  • Those patchwork fixes gave the illusion of relentless progress but masked the lack of a coherent development strategy, former employees said. While competitors such as Alphabet-owned Waymo adopted strict testing protocols that limited where self-driving software could operate, Tesla eventually pushed Full Self-Driving out to 360,000 owners — who paid up to $15,000 to be eligible for the features — and let them activate it at their own discretion.
  • Tesla’s philosophy is simple: The more data (in this case driving) the artificial intelligence guiding the car is exposed to, the faster it learns. But that crude model also means there is a lighter safety net. Tesla has chosen to effectively allow the software to learn on its own, developing sensibilities akin to a brain via technology dubbed “neural nets” with fewer rules, the former employees said. While this has the potential to speed the process, it boils down to essentially a trial and error method of training.
  • Radar originally played a major role in the design of the Tesla vehicles and software, supplementing the cameras by offering a reality check of what was around, particularly if vision might be obscured. Tesla also used ultrasonic sensors, shorter-range devices that detect obstructions within inches of the car. (The company announced last year it was eliminating those as well.)
  • Musk, as the chief tester, also asked for frequent bug fixes to the software, requiring engineers to go in and adjust code. “Nobody comes up with a good idea while being chased by a tiger,” a former senior executive recalled an engineer on the project telling him
  • Toward the end of 2020, Autopilot employees turned on their computers to find in-house workplace monitoring software installed, former employees said. It monitored keystrokes and mouse clicks, and kept track of their image labeling. If the mouse did not move for a period of time, a timer started — and employees could be reprimanded, up to being fired, for periods of inactivity, the former employees said.
  • Some of the people who spoke with The Post said that approach has introduced risks. “I just knew that putting that software out in the streets would not be safe,” said a former Tesla Autopilot engineer who spoke on the condition of anonymity for fear of retaliation. “You can’t predict what the car’s going to do.”
  • Some of the people who spoke with The Post attributed Tesla’s sudden uptick in “phantom braking” reports — where the cars aggressively slow down from high speeds — to the lack of radar. The Post analyzed data from the National Highway Traffic Safety Administration to show incidences surged last year, prompting a federal regulatory investigation.
  • The data showed reports of “phantom braking” rose to 107 complaints over three months, compared to only 34 in the preceding 22 months. After The Post highlighted the problem in a news report, NHTSA received about 250 complaints of the issue in a two-week period. The agency opened an investigation after, it said, it received 354 complaints of the problem spanning a period of nine months.
  • “It’s not the sole reason they’re having [trouble] but it’s a big part of it,” said Missy Cummings, a former senior safety adviser for NHTSA, who has criticized the company’s approach and recused herself on matters related to Tesla. “The radar helped detect objects in the forward field. [For] computer vision which is rife with errors, it serves as a sensor fusion way to check if there is a problem.”
  • Even with radar, Teslas were less sophisticated than the lidar and radar-equipped cars of competitors. “One of the key advantages of lidar is that it will never fail to see a train or truck, even if it doesn’t know what it is,” said Brad Templeton, a longtime self-driving car developer and consultant who worked on Google’s self-driving car. “It knows there is an object in front and the vehicle can stop without knowing more than that.”
  • Musk’s resistance to suggestions led to a culture of deference, former employees said. Tesla fired employees who pushed back on his approach. The company was also pushing out so many updates to its software that in late 2021, NHTSA publicly admonished Tesla for issuing fixes without a formal recall notice.
  • Tesla engineers have been burning out, quitting and looking for opportunities elsewhere. Andrej Karpathy, Tesla’s director of artificial intelligence, took a months-long sabbatical last year before leaving Tesla and taking a position this year at OpenAI, the company behind language-modeling software ChatGPT.
  • One of the former employees said that he left for Waymo. “They weren’t really wondering if their car’s going to run the stop sign,” the engineer said. “They’re just focusing on making the whole thing achievable in the long term, as opposed to hurrying it up.”