
Javier E

The new tech worldview | The Economist

  • Sam Altman is almost supine
  • the 37-year-old entrepreneur looks about as laid-back as someone with a galloping mind ever could. Yet the CEO of OpenAI, a startup reportedly valued at nearly $20bn whose mission is to make artificial intelligence a force for good, is not one for light conversation
  • Joe Lonsdale, 40, is nothing like Mr Altman. He’s sitting in the heart of Silicon Valley, dressed in linen with his hair slicked back. The tech investor and entrepreneur, who has helped create four unicorns plus Palantir, a data-analytics firm worth around $15bn that works with soldiers and spooks
  • a “builder class”—a brains trust of youngish idealists, which includes Patrick Collison, co-founder of Stripe, a payments firm valued at $74bn, and other (mostly white and male) techies, who are posing questions that go far beyond the usual interests of Silicon Valley’s titans. They include the future of man and machine, the constraints on economic growth, and the nature of government.
  • They share other similarities. Business provided them with their clout, but doesn’t seem to satisfy their ambition
  • The number of techno-billionaires in America (Mr Collison included) has more than doubled in a decade.
  • Some of them, like the Medicis in medieval Florence, are keen to use their money to bankroll the intellectual ferment
  • The other is Paul Graham, co-founder of Y Combinator, a startup accelerator, whose essays on everything from cities to politics are considered required reading on tech campuses.
  • Mr Altman puts it more optimistically: “The iPhone and cloud computing enabled a Cambrian explosion of new technology. Some things went right and some went wrong. But one thing that went weirdly right is a lot of people got rich and said ‘OK, now what?’”
  • A belief that with money and brains they can reboot social progress is the essence of this new mindset, making it resolutely upbeat
  • The question is: are the rest of them further evidence of the tech industry’s hubristic decadence? Or do they reflect the start of a welcome capacity for renewal?
  • Two well-known entrepreneurs from that era provided the intellectual seed capital for some of today’s techno nerds.
  • Mr Thiel, a would-be libertarian philosopher and investor
  • This cohort of eggheads starts from common ground: frustration with what they see as sluggish progress in the world around them.
  • Yet the impact could ultimately be positive. Frustrations with a sluggish society have encouraged them to put their money and brains to work on problems from science funding and the redistribution of wealth to entirely new universities. Their exaltation of science may encourage a greater focus on hard tech
  • the rationalist movement has hit the mainstream. The result is a fascination with big ideas that its advocates believe goes beyond simply rose-tinted tech utopianism
  • A burgeoning example of this is “progress studies”, a movement that Mr Collison and Tyler Cowen, an economist and seer of the tech set, advocated for in an article in the Atlantic in 2019
  • Progress, they think, is a combination of economic, technological and cultural advancement—and deserves its own field of study
  • There are other examples of this expansive worldview. In an essay in 2021 Mr Altman set out a vision that he called “Moore’s Law for Everything”, based on similar logic to the semiconductor revolution. In it, he predicted that smart machines, building ever smarter replacements, would in the coming decades outcompete humans for work. This would create phenomenal wealth for some, obliterate wages for others, and require a vast overhaul of taxation and redistribution
  • His two bets, on OpenAI and nuclear fusion, have become fashionable of late—the former’s chatbot, ChatGPT, is all the rage. He has invested $375m in Helion, a company that aims to build a fusion reactor.
  • Mr Lonsdale, who shares a libertarian streak with Mr Thiel, has focused attention on trying to fix the shortcomings of society and government. In an essay this year called “In Defence of Us”, he argues against “historical nihilism”, or an excessive focus on the failures of the West.
  • With a soft spot for Roman philosophy, he has created the Cicero Institute in Austin that aims to inject free-market principles such as competition and transparency into public policy.
  • He is also bringing the startup culture to academia, backing a new place of learning called the University of Austin, which emphasises free speech.
  • All three have business ties to their mentors. As a teen, Mr Altman was part of the first cohort of founders in Mr Graham’s Y Combinator, which went on to back successes such as Airbnb and Dropbox. In 2014 he replaced him as its president, and for a while counted Mr Thiel as a partner (Mr Altman keeps an original manuscript of Mr Thiel’s book “Zero to One” in his library). Mr Thiel was also an early backer of Stripe, founded by Mr Collison and his brother, John. Mr Graham saw promise in Patrick Collison while the latter was still at school. He was soon invited to join Y Combinator. Mr Graham remains a fan: “If you dropped Patrick on a desert island, he would figure out how to reproduce the Industrial Revolution,”
  • While at university, Mr Lonsdale edited the Stanford Review, a contrarian publication co-founded by Mr Thiel. He went on to work for his mentor and the two men eventually helped found Palantir. He still calls Mr Thiel “a genius”—though he claims these days to be less “cynical” than his guru.
  • “The tech industry has always told these grand stories about itself,” says Adrian Daub of Stanford University and author of the book, “What Tech Calls Thinking”. Mr Daub sees it as a way of convincing recruits and investors to bet on their risky projects. “It’s incredibly convenient for their business models.”
  • In the 2000s Mr Thiel supported the emergence of a small community of online bloggers, self-named the “rationalists”, who were focused on removing cognitive biases from thinking (Mr Thiel has since distanced himself). That intellectual heritage dates even further back, to “cypherpunks”, who noodled about cryptography, as well as “extropians”, who believed in improving the human condition through life extensions
  • Silicon Valley has shown an uncanny ability to reinvent itself in the past.

AI fears are reaching the top levels of finance and law - The Washington Post

  • In a report released last week, the forum said that its survey of 1,500 policymakers and industry leaders found that fake news and propaganda written and boosted by AI chatbots are the biggest short-term risk to the global economy. Around half of the world’s population is participating in elections this year in countries including the United States, Mexico, Indonesia and Pakistan, and disinformation researchers are concerned AI will make it easier for people to spread false information and increase societal conflict.
  • AI also may be no better than humans at spotting unlikely dangers or “tail risks,” said Allen. Before 2008, few people on Wall Street foresaw the end of the housing bubble. One reason was that since housing prices had never declined nationwide before, Wall Street’s models assumed such a uniform decline would never occur. Even the best AI systems are only as good as the data they are based on, Allen said.
  • As AI grows more complex and capable, some experts worry about “black box” automation that is unable to explain how it arrived at a decision, leaving humans uncertain about its soundness. Poorly designed or managed systems could undermine the trust between buyer and seller that is required for any financial transaction
  • Other pundits and entrepreneurs say concerns about the tech are overblown and risk pushing regulators to block innovations that could help people and boost tech company profits.
  • Last year, politicians and policymakers around the world also grappled to make sense of how AI will fit into society. Congress held multiple hearings. President Biden issued an executive order saying AI was the “most consequential technology of our time.” The United Kingdom convened a global AI forum where Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” The concerns include the risk that “generative” AI — which can create text, video, images and audio — can be used to create misinformation, displace jobs or even help people create dangerous bioweapons.

The Lesson of 1975 for Today's Pessimists - WSJ

  • out of the depths of the inflation-riddled ’70s came the democratization of computing and finance. It feels to me as if we’re at a similar point. What’s going to be democratized next?
  • Start with quantum computing, autonomous vehicles and delivery drones. Even the once-in-a-generation innovation of machine learning and artificial intelligence is generating fear and doubt. Like homebrew computers, we’re at the rudimentary stage.
  • Especially in medicine. Healthcare pricing, billing and reimbursements are completely nonsensical. ObamaCare made it worse, but change is beginning. Pandemic-enabled telemedicine is a crack in the old way’s armor. Self-directed healthcare will grow. Ozempic and magic pills are changing lives. Crispr gene editing is also rudimentary but could extend healthy life expectancies. Add precision oncology, computational biology, focused ultrasound and more. The upside is endless.
  • AI will usher in knowledgeable and friendly automated customer service any day now. But there is so much else on the innovation horizon: osmotic energy, geothermal, nuclear fusion, autonomous farming, photonic computing, human longevity. Plus all the stuff in research labs we haven’t heard of yet, let alone invented and brought to market.
  • Every industry is about to change, which will defy skeptics. Figure out how, and then, as Mr. Wozniak suggests, get your hands dirty. As always, the pain point is cost. Look for things that get cheaper—that’s the only way to clear the smoke and get new marvels into global consumer hands.

In defense of science fiction - by Noah Smith - Noahpinion

  • I’m a big fan of science fiction (see my list of favorites from last week)! So when people start bashing the genre, I tend to leap to its defense
  • this time, the people doing the bashing are some serious heavyweights themselves — Charles Stross, the celebrated award-winning sci-fi author, and Tyler Austin Harper, a professor who studies science fiction for a living
  • The two critiques center around the same idea — that rich people have misused sci-fi, taking inspiration from dystopian stories and working to make those dystopias a reality.
  • [Science fiction’s influence]…leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals…[T]he billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat.
  • Even then it would be hard to argue exogeneity, since censorship is a response to society’s values as well as a potential cause of them.
  • Stross is alleging that the billionaires are getting Gernsback and Campbell’s intentions exactly right. His problem is simply that Gernsback and Campbell were kind of right-wing, at least by modern standards, and he’s worried that their sci-fi acted as propaganda for right-wing ideas.
  • The question of whether literature has a political effect is an empirical one — and it’s a very difficult empirical one. It’s extremely hard to test the hypothesis that literature exerts a diffuse influence on the values and preconceptions of the citizenry
  • I think Stross really doesn’t come up with any credible examples of billionaires mistaking cautionary tales for road maps. Instead, most of his article focuses on a very different critique — the idea that sci-fi authors inculcate rich technologists with bad values and bad visions of what the future ought to look like:
  • I agree that the internet and cell phones have had an ambiguous overall impact on human welfare. If modern technology does have a Torment Nexus, it’s the mobile-social nexus that keeps us riveted to highly artificial, attenuated parasocial interactions for every waking hour of our day. But these technologies are still very young, and it remains to be seen whether the ways in which we use them will get better or worse over time.
  • There are very few technologies — if any — whose impact we can project into the far future at the moment of their inception. So unless you think our species should just refuse to create any new technology at all, you have to accept that each one is going to be a bit of a gamble.
  • As for weapons of war, those are clearly bad in terms of their direct effects on the people on the receiving end. But it’s possible that more powerful weapons — such as the atomic bomb — serve to deter more deaths than they cause
  • yes, AI is risky, but the need to manage and limit risk is a far cry from the litany of negative assumptions and extrapolations that often gets flung in the technology’s direction
  • I think the main problem with Harper’s argument is simply techno-pessimism. So far, technology’s effects on humanity have been mostly good, lifting us up from the muck of desperate poverty and enabling the creation of a healthier, more peaceful, more humane world. Any serious discussion of the effects of innovation on society must acknowledge that. We might have hit an inflection point where it all goes downhill from here, and future technologies become the Torment Nexuses that we’ve successfully avoided in the past. But it’s very premature to assume we’ve hit that point.
  • I understand that the 2020s are an exhausted age, in which we’re still reeling from the social ructions of the 2010s. I understand that in such a weary and fearful condition, it’s natural to want to slow the march of technological progress as a proxy for slowing the headlong rush of social progress
  • And I also understand how easy it is to get negatively polarized against billionaires, and any technologies that billionaires invent, and any literature that billionaires like to read.
  • But at a time when we’re creating vaccines against cancer and abundant clean energy and any number of other life-improving and productivity-boosting marvels, it’s a little strange to think that technology is ruining the world
  • The dystopian elements of modern life are mostly just prosaic, old things — political demagogues, sclerotic industries, social divisions, monopoly power, environmental damage, school bullies, crime, opiates, and so on

'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-lu...

  • A few weeks back, in January, the largest-ever survey of AI researchers found that 16% of them believed their work would lead to the extinction of humankind.
  • “That’s a one-in-six chance of catastrophe,” says Alistair Stewart, a former British soldier turned master’s student. “That’s Russian-roulette odds.”
  • What would the others have us do? Stewart, the soldier turned grad student, wants a moratorium on the development of AIs until we understand them better – until those Russian-roulette-like odds improve. Yudkowsky would have us freeze everything today, this instant. “You could say that nobody’s allowed to train something more powerful than GPT-4,” he suggests. “Humanity could decide not to die and it would not be that hard.”

Mistral, the 9-Month-Old AI Startup Challenging Silicon Valley's Giants - WSJ

  • Mensch, who started in academia, has spent much of his life figuring out how to make AI and machine-learning systems more efficient. Early last year, he joined forces with co-founders Timothée Lacroix, 32, and Guillaume Lample, 33, who were then at Meta Platforms’ artificial-intelligence lab in Paris. 
  • They are betting that their small team can outmaneuver Silicon Valley titans by finding more efficient ways to build and deploy AI systems. And they want to do it in part by giving away many of their AI systems as open-source software.
  • Eric Boyd, corporate vice president of Microsoft’s AI platform, said Mistral presents an intriguing test of how far clever engineering can push AI systems. “So where else can you go?” he asked. “That remains to be seen.”
  • Mensch said his new model cost less than €20 million, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company’s biggest models cost “much more than” $50 million to $100 million.
  • Brave Software made a free, open-source model from Mistral the default to power its web-browser chatbot, said Brian Bondy, Brave’s co-founder and chief technology officer. He said that the company finds the quality comparable with proprietary models, and Mistral’s open-source approach also lets Brave control the model locally.
  • “We want to be the most capital-efficient company in the world of AI,” Mensch said. “That’s the reason we exist.” 
  • Mensch joined the Google AI unit then called DeepMind in late 2020, where he worked on the team building so-called large language models, the type of AI system that would later power ChatGPT. By 2022, he was one of the lead authors of a paper about a new AI model called Chinchilla, which changed the field’s understanding of the relationship among the size of an AI model, how much data is used to build it and how well it performs, known as AI scaling laws.
  • Mensch took a role lobbying French policymakers, including French President Emmanuel Macron, against certain elements of the European Union’s new AI Act, which Mensch warned could slow down companies and would, in his view, do nothing to make AI safer. After changes to the text in Brussels, it will be a manageable burden for Mistral, Mensch says, even if he thinks the law should have remained focused on how AI is used rather than also regulating the underlying technology.  
  • For Mensch and his co-founders, releasing their initial AI systems as open source that anyone could use or adapt free of charge was an important principle. It was also a way to get noticed by developers and potential clients eager for more control over the AI they use
  • Mistral’s most advanced models, including the one unveiled Monday, aren’t available open source. 
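The "AI scaling laws" mentioned in the Chinchilla annotation above can be sketched with a widely cited back-of-envelope rule. The specific constants below (roughly 6 FLOPs of training compute per parameter per token, and about 20 training tokens per parameter at the compute-optimal point) are assumptions drawn from common readings of the Chinchilla result, not figures given in this article:

```python
# Rough sketch of the "Chinchilla" compute-optimal rule of thumb.
# Constants here are assumptions, not exact results from the paper.

def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a model of n_params."""
    return tokens_per_param * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard back-of-envelope estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

n = 70e9                              # a 70B-parameter model, Chinchilla's own size
d = compute_optimal_tokens(n)         # ~1.4e12 tokens
flops = training_flops(n, d)          # ~5.9e23 FLOPs
```

On these assumptions, a 70B-parameter model would be trained on roughly 1.4 trillion tokens, which is in line with the setup Chinchilla itself reported; the practical upshot the annotation describes is that many earlier models were far larger, and trained on far less data, than this rule suggests is optimal.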

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”

OpenAI Just Gave Away the Entire Game - The Atlantic

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.

Elon Musk's Latest Dust-Up: What Does 'Science' Even Mean? - WSJ

  • Elon Musk is racing to a sci-fi future while the AI chief at Meta Platforms is arguing for one rooted in the traditional scientific approach.
  • Meta’s top AI scientist, Yann LeCun, criticized the rival company and Musk himself. 
  • Musk turned to a favorite rebuttal—a veiled suggestion that the executive, who is also a high-profile professor, wasn’t accomplishing much: “What ‘science’ have you done in the past 5 years?”
  • “Over 80 technical papers published since January 2022,” LeCun responded. “What about you?”
  • To which Musk posted: “That’s nothing, you’re going soft. Try harder!
  • At stake are the hearts and minds of AI experts—academic and otherwise—needed to usher in the technology
  • “Join xAI,” LeCun wrote, “if you can stand a boss who: – claims that what you are working on will be solved next year (no pressure). – claims that what you are working on will kill everyone and must be stopped or paused (yay, vacation for 6 months!). – claims to want a ‘maximally rigorous pursuit of the truth’ but spews crazy-ass conspiracy theories on his own social platform.”
  • Some read Musk’s “science” dig as dismissing the role research has played for a generation of AI experts. For years, the Metas and Googles of the world have hired the top minds in AI from universities, indulging their desires to keep a foot in both worlds by allowing them to release their research publicly, while also trying to deploy products. 
  • For an academic such as LeCun, published research, whether peer-reviewed or not, allowed ideas to flourish and reputations to be built, which in turn helped build stars in the system.
  • LeCun has been at Meta since 2013 while serving as an NYU professor since 2003. His tweets suggest he subscribes to the philosophy that one’s work needs to be published—put through the rigors of being shown to be correct and reproducible—to really be considered science. 
  • “If you do research and don’t publish, it’s not Science,” he posted in a lengthy tweet Tuesday rebutting Musk. “If you never published your research but somehow developed it into a product, you might die rich,” he concluded. “But you’ll still be a bit bitter and largely forgotten.” 
  • After pushback, he later clarified in another post: “What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact.”
  • The spat inspired debate throughout the scientific community. “What is science?” Nature, a scientific journal, asked in a headline about the dust-up.
  • Others, such as Palmer Luckey, a former Facebook executive and founder of Anduril Industries, a defense startup, took issue with LeCun’s definition of science. “The extreme arrogance and elitism is what people have a problem with,” he tweeted.
  • For Musk, who prides himself on his physics-based viewpoint and likes to tout how he once aspired to work at a particle accelerator in pursuit of the universe’s big questions, LeCun’s definition of science might sound too ivory-tower. 
  • Musk has blamed universities for helping promote what he sees as overly liberal thinking and other symptoms of what he calls the Woke Mind Virus. 
  • Over the years, an appeal of working for Musk has been the impression that his companies move quickly, filled with engineers attracted to tackling hard problems and seeing their ideas put into practice.
  • “I’ve teamed up with Elon to see if we can actually apply these new technologies to really make a dent in our understanding of the universe,” Igor Babuschkin, an AI expert who worked at OpenAI and Google’s DeepMind, said last year as part of announcing xAI’s mission. 
  • The creation of xAI quickly sent ripples through the AI labor market, with one rival complaining it was hard to compete for potential candidates attracted to Musk and his reputation for creating value.
  • And that was before xAI’s latest round raised billions of dollars, putting its valuation at $24 billion, kicking off a new recruiting drive. 
  • It was already a seller’s market for AI talent, with estimates that there might be only a couple hundred people out there qualified to deal with certain pressing challenges in the industry and that top candidates can easily earn compensation packages worth $1 million or more.
  • Since the launch, Musk has been quick to criticize competitors for what he perceived as liberal biases in rival AI chatbots. His pitch of xAI being the anti-woke bastion seems to have worked to attract some like-minded engineers.
  • As for Musk’s final response to LeCun’s defense of research, he posted a meme featuring Pepé Le Pew that read: “my honest reaction.”
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington ... - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups.
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened his study of influence operations from around the world, including one it traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • By supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.