History Readings / Group items tagged artificial intelligence

Javier E

'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-lu... - 0 views

  • A few weeks back, in January, the largest-ever survey of AI researchers found that 16% of them believed their work would lead to the extinction of humankind.
  • “That’s a one-in-six chance of catastrophe,” says Alistair Stewart, a former British soldier turned master’s student. “That’s Russian-roulette odds.”
  • What would the others have us do? Stewart, the soldier turned grad student, wants a moratorium on the development of AIs until we understand them better – until those Russian-roulette-like odds improve. Yudkowsky would have us freeze everything today, this instant. “You could say that nobody’s allowed to train something more powerful than GPT-4,” he suggests. “Humanity could decide not to die and it would not be that hard.”
Javier E

Mistral, the 9-Month-Old AI Startup Challenging Silicon Valley's Giants - WSJ - 0 views

  • Mensch, who started in academia, has spent much of his life figuring out how to make AI and machine-learning systems more efficient. Early last year, he joined forces with co-founders Timothée Lacroix, 32, and Guillaume Lample, 33, who were then at Meta Platforms’ artificial-intelligence lab in Paris. 
  • They are betting that their small team can outmaneuver Silicon Valley titans by finding more efficient ways to build and deploy AI systems. And they want to do it in part by giving away many of their AI systems as open-source software.
  • Eric Boyd, corporate vice president of Microsoft’s AI platform, said Mistral presents an intriguing test of how far clever engineering can push AI systems. “So where else can you go?” he asked. “That remains to be seen.”
  • Mensch said his new model cost less than €20 million, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company’s biggest models cost “much more than” $50 million to $100 million.
  • Brave Software made a free, open-source model from Mistral the default to power its web-browser chatbot, said Brian Bondy, Brave’s co-founder and chief technology officer. He said that the company finds the quality comparable with proprietary models, and Mistral’s open-source approach also lets Brave control the model locally.
  • “We want to be the most capital-efficient company in the world of AI,” Mensch said. “That’s the reason we exist.” 
  • Mensch joined the Google AI unit then called DeepMind in late 2020, where he worked on the team building so-called large language models, the type of AI system that would later power ChatGPT. By 2022, he was one of the lead authors of a paper about a new AI model called Chinchilla, which changed the field’s understanding of the relationship among the size of an AI model, how much data is used to build it and how well it performs, known as AI scaling laws (a brief sketch of that relationship follows this item’s annotations).
  • Mensch took a role lobbying French policymakers, including French President Emmanuel Macron, against certain elements of the European Union’s new AI Act, which Mensch warned could slow down companies and would, in his view, do nothing to make AI safer. After changes to the text in Brussels, it will be a manageable burden for Mistral, Mensch says, even if he thinks the law should have remained focused on how AI is used rather than also regulating the underlying technology.  
  • For Mensch and his co-founders, releasing their initial AI systems as open source that anyone could use or adapt free of charge was an important principle. It was also a way to get noticed by developers and potential clients eager for more control over the AI they use
  • Mistral’s most advanced models, including the one unveiled Monday, aren’t available open source. 
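For readers unfamiliar with the scaling laws mentioned above, here is a rough sketch of the Chinchilla result (Hoffmann et al., 2022), not drawn from the article itself. The paper models training loss as a function of parameter count N and training-token count D, approximately

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where E, A, B, \alpha and \beta are fitted constants. Its headline finding was that, for a fixed compute budget, N and D should grow in roughly equal proportion (on the order of 20 training tokens per parameter), implying that many earlier large models were undertrained on data relative to their size.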
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

OpenAI Just Gave Away the Entire Game - The Atlantic - 0 views

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.
Javier E

Elon Musk's Latest Dust-Up: What Does 'Science' Even Mean? - WSJ - 0 views

  • Elon Musk is racing to a sci-fi future while the AI chief at Meta Platforms is arguing for one rooted in the traditional scientific approach.
  • Meta’s top AI scientist, Yann LeCun, criticized the rival company and Musk himself. 
  • Musk turned to a favorite rebuttal—a veiled suggestion that the executive, who is also a high-profile professor, wasn’t accomplishing much: “What ‘science’ have you done in the past 5 years?”
  • “Over 80 technical papers published since January 2022,” LeCun responded. “What about you?”
  • To which Musk posted: “That’s nothing, you’re going soft. Try harder!”
  • At stake are the hearts and minds of AI experts—academic and otherwise—needed to usher in the technology
  • “Join xAI,” LeCun wrote, “if you can stand a boss who: – claims that what you are working on will be solved next year (no pressure). – claims that what you are working on will kill everyone and must be stopped or paused (yay, vacation for 6 months!). – claims to want a ‘maximally rigorous pursuit of the truth’ but spews crazy-ass conspiracy theories on his own social platform.”
  • Some read Musk’s “science” dig as dismissing the role research has played for a generation of AI experts. For years, the Metas and Googles of the world have hired the top minds in AI from universities, indulging their desires to keep a foot in both worlds by allowing them to release their research publicly, while also trying to deploy products. 
  • For an academic such as LeCun, published research, whether peer-reviewed or not, allowed ideas to flourish and reputations to be built, which in turn helped build stars in the system.
  • LeCun has been at Meta since 2013 while serving as an NYU professor since 2003. His tweets suggest he subscribes to the philosophy that one’s work needs to be published—put through the rigors of being shown to be correct and reproducible—to really be considered science. 
  • “If you do research and don’t publish, it’s not Science,” he posted in a lengthy tweet Tuesday rebutting Musk. “If you never published your research but somehow developed it into a product, you might die rich,” he concluded. “But you’ll still be a bit bitter and largely forgotten.” 
  • After pushback, he later clarified in another post: “What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact.”
  • The spat inspired debate throughout the scientific community. “What is science?” Nature, a scientific journal, asked in a headline about the dust-up.
  • Others, such as Palmer Luckey, a former Facebook executive and founder of Anduril Industries, a defense startup, took issue with LeCun’s definition of science. “The extreme arrogance and elitism is what people have a problem with,” he tweeted.
  • For Musk, who prides himself on his physics-based viewpoint and likes to tout how he once aspired to work at a particle accelerator in pursuit of the universe’s big questions, LeCun’s definition of science might sound too ivory-tower. 
  • Musk has blamed universities for helping promote what he sees as overly liberal thinking and other symptoms of what he calls the Woke Mind Virus. 
  • Over the years, an appeal of working for Musk has been the impression that his companies move quickly, filled with engineers attracted to tackling hard problems and seeing their ideas put into practice.
  • “I’ve teamed up with Elon to see if we can actually apply these new technologies to really make a dent in our understanding of the universe,” Igor Babuschkin, an AI expert who worked at OpenAI and Google’s DeepMind, said last year as part of announcing xAI’s mission. 
  • The creation of xAI quickly sent ripples through the AI labor market, with one rival complaining it was hard to compete for potential candidates attracted to Musk and his reputation for creating value
  • that was before xAI’s latest round raised billions of dollars, putting its valuation at $24 billion, kicking off a new recruiting drive. 
  • It was already a seller’s market for AI talent, with estimates that there might be only a couple hundred people out there qualified to deal with certain pressing challenges in the industry and that top candidates can easily earn compensation packages worth $1 million or more
  • Since the launch, Musk has been quick to criticize competitors for what he perceived as liberal biases in rival AI chatbots. His pitch of xAI being the anti-woke bastion seems to have worked to attract some like-minded engineers.
  • As for Musk’s final response to LeCun’s defense of research, he posted a meme featuring Pepé Le Pew that read: “my honest reaction.”